Research Projects

Spectral Capture
Conventional and digital photography rely on the principle of trichromacy: the eye has three color sensors with sensitivities in the long- (L), medium- (M), and short-wavelength (S) regions of the visible spectrum. For a specific illuminating and viewing condition, the trichromatic signal defines an object’s color. CIEXYZ and CIELAB are two standardized trichromatic color spaces. Cameras also have sensitivities in these three general spectral regions, referred to as RGB. Through color management, RGB can be transformed to approximations of CIEXYZ, CIELAB, or other standardized systems such as wide-gamut RGB. All trichromatic systems are limited to a single illuminating and viewing condition; a match that holds only for one such condition is a metameric match. Furthermore, the camera sensitivities may not lead to accurate estimates of standardized color.
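
As a small illustration of the color-management step mentioned above, here is a minimal sketch of fitting a linear transform from camera RGB to CIEXYZ. The patch values are placeholders; a real characterization would use measured chart data and often a higher-order model.

    import numpy as np

    # Hypothetical characterization: fit a 3x3 matrix mapping camera RGB to
    # CIEXYZ by least squares over a set of training patches (placeholder data).
    rgb = np.random.rand(24, 3)   # camera responses for a 24-patch chart
    xyz = np.random.rand(24, 3)   # spectrophotometer-measured CIEXYZ for the same patches
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # least-squares 3x3 transform
    xyz_estimate = rgb @ M        # approximate CIEXYZ for new camera data

The residual error of such a fit is one way to quantify the last point: camera sensitivities that are not linear combinations of the color-matching functions cannot yield exact CIEXYZ.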

Spectral imaging, in which optical data as a function of wavelength are captured, eliminates the limitations of RGB imaging. Having spectral data enables accurate color encoding and the ability to calculate color for any illuminating and viewing condition. One decided advantage of this approach is the elimination of visual editing from the imaging workflow, greatly improving the efficiency of creating an image archive.
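
To make the "any illuminating and viewing condition" claim concrete, the following sketch computes CIEXYZ tristimulus values from a sampled reflectance spectrum under an arbitrary illuminant. The arrays are placeholders standing in for real color-matching functions and measured data.

    import numpy as np

    def spectra_to_xyz(reflectance, illuminant, cmfs):
        # reflectance: (N,) spectral reflectance factor of the object
        # illuminant:  (N,) relative spectral power of the light source
        # cmfs:        (N, 3) CIE color-matching functions on the same wavelength grid
        k = 100.0 / np.sum(illuminant * cmfs[:, 1])   # normalize so a perfect reflector has Y = 100
        return k * np.sum((illuminant * reflectance)[:, None] * cmfs, axis=0)

    wavelengths = np.arange(400, 701, 10)             # illustrative 10 nm sampling
    reflectance = np.full(wavelengths.size, 0.5)      # neutral gray placeholder
    cmfs = np.random.rand(wavelengths.size, 3)        # stand-in for the CIE 1931 observer

    # Because the reflectance itself is stored, the same pixel can be evaluated
    # under any illuminant simply by swapping the illuminant array.
    for illum in (np.ones(wavelengths.size), np.linspace(0.4, 1.0, wavelengths.size)):
        print(spectra_to_xyz(reflectance, illum, cmfs))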

Spectral Printing
Ideally, we would like prints, whether in books or as posters, that match a work of art under any illuminating and viewing condition. Imagine placing the poster adjacent to the art and having the colors match. If we change the lighting, the match persists. In this way, the poster is a surrogate painting. This active area of research has included spectral color management, multi-ink color separation algorithms, and ink design. We are in the process of creating a prototype end-to-end color reproduction system including spectral-based capture, archiving, and printing using multi-colorant inkjet printers.
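
At its simplest, a spectral color separation can be framed as choosing the ink combination whose predicted reflectance is closest to the artwork's. The sketch below scores candidates by plain RMS spectral difference over a placeholder candidate table; the actual research involves printer models, ink limits, and more refined objectives.

    import numpy as np

    def spectral_rms(r1, r2):
        # Root-mean-square difference between two sampled reflectance curves.
        return np.sqrt(np.mean((r1 - r2) ** 2))

    # Placeholder forward model: each row is the reflectance a multi-ink printer
    # would produce for one candidate ink combination (in practice these come
    # from a printer model or a measured test chart).
    candidate_reflectances = np.random.rand(100, 31)
    target = np.random.rand(31)   # artwork reflectance at one image location

    errors = np.array([spectral_rms(r, target) for r in candidate_reflectances])
    best = candidate_reflectances[np.argmin(errors)]   # candidate with the smallest spectral error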

Inpainting (retouching)
Many works of art experience paint losses. In some cases, these losses are quite noticeable and distracting. One conservation philosophy is to inpaint (also known as retouch) these paint losses to unify the appearance. Special materials and techniques are used so that inpainted areas can be easily removed; that is, the treatment is reversible. For regions with little image content, for example, skies and modern color-field paintings, selecting retouch paints that match the surrounding area for many conditions of illumination and viewing is critical. Pigment mixtures are sought that result in minimal metamerism. We have performed research to develop techniques that can be used by the conservation community where only spreadsheet software and a small-aperture reflection spectrophotometer are required. These techniques are simplifications of the methods used at a paint store or in industry to determine colorants and their amounts (the recipe) that match a standard.
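
For background, industrial recipe prediction of this kind is usually built on single-constant Kubelka-Munk theory, in which a mixture's K/S curve is linear in colorant concentration. The sketch below shows that bookkeeping with placeholder data; the spreadsheet tools developed for conservators may differ in detail.

    import numpy as np

    def k_over_s(reflectance):
        # Kubelka-Munk function: K/S = (1 - R)^2 / (2R)
        return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

    def mixture_reflectance(conc, ks_colorants, ks_substrate):
        # Single-constant KM: mixture K/S = substrate K/S plus the sum of
        # concentration-weighted unit K/S curves; then invert back to reflectance.
        ks_mix = ks_substrate + ks_colorants.T @ conc
        return 1.0 + ks_mix - np.sqrt(ks_mix ** 2 + 2.0 * ks_mix)

    def estimate_recipe(target_reflectance, ks_colorants, ks_substrate):
        # Because mixture K/S is linear in concentration, a least-squares solve
        # in K/S space gives a starting recipe (clipped to non-negative amounts).
        rhs = k_over_s(target_reflectance) - ks_substrate
        conc, *_ = np.linalg.lstsq(ks_colorants.T, rhs, rcond=None)
        return np.clip(conc, 0.0, None)

    wl = np.arange(400, 701, 10)
    ks_colorants = np.random.rand(3, wl.size)      # placeholder unit K/S curves for three paints
    ks_substrate = np.full(wl.size, 0.05)          # near-white ground
    target = mixture_reflectance(np.array([0.2, 0.5, 0.1]), ks_colorants, ks_substrate)
    print(estimate_recipe(target, ks_colorants, ks_substrate))   # recovers roughly [0.2, 0.5, 0.1]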

Pigment Mapping
One of the “holy grails” of imaging artwork is to separate a work of art into its constituent materials. The result would be an image map for each material, with gray scale representing amount. In essence, a high-dimensional spectral image is transformed to an n-dimensional image, where n is the number of materials. We have made progress in this area by combining our spectral imaging and inpainting research with image classification techniques.
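
A generic way to pose this transformation is per-pixel spectral unmixing against a library of reference pigment spectra. The non-negative least-squares sketch below illustrates the idea with placeholder data; the actual approach combines Kubelka-Munk modeling, the inpainting results, and classification rather than simple linear unmixing.

    import numpy as np
    from scipy.optimize import nnls

    def map_pigments(spectral_image, endmembers):
        # spectral_image: (rows, cols, bands) reflectance image
        # endmembers:     (n_pigments, bands) reference spectrum for each pigment
        # Returns (rows, cols, n_pigments) abundances: one gray-scale map per pigment.
        rows, cols, bands = spectral_image.shape
        A = endmembers.T                              # (bands, n_pigments)
        maps = np.zeros((rows, cols, endmembers.shape[0]))
        for i in range(rows):
            for j in range(cols):
                maps[i, j], _ = nnls(A, spectral_image[i, j])
        return maps

    endmembers = np.random.rand(4, 31)                # stand-in spectra for four pigments
    image = np.random.rand(8, 8, 31)                  # stand-in spectral image
    abundances = map_pigments(image, endmembers)      # (8, 8, 4) material maps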

Fluorescence Imaging
The amount of fluorescent emission depends on the spectral power distribution of the source of illumination. As a consequence, when the camera-taking illuminant does not match the color-encoding illuminant (e.g., CIE illuminant D50 for printing), the resulting image will have significant color error. Mismatched taking and encoding illuminants are the usual occurrence. We have a research program to image fluorescent artwork in a way that minimizes this problem. It is based on the optimal method of measuring fluorescent materials: a bi-spectrometer, in which one monochromator is placed between the light source and the object and a second monochromator is placed between the object and the detector. In our approach, the first monochromator is replaced with a set of so-called fluorescence-weakening and fluorescence-killing filters, and the second monochromator is replaced with our multi-channel camera.
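
To see why the fluoresced component ties the captured color to the taking illuminant, here is a minimal sketch of the bispectral (Donaldson-matrix) bookkeeping that the two-monochromator measurement yields. Wavelength intervals and normalization constants are omitted, all data are placeholders, and this describes the measurement model rather than the filter-based camera workflow itself.

    import numpy as np

    def total_radiance_factor(reflected, donaldson, illuminant):
        # reflected:  (N,) ordinary reflected radiance factor
        # donaldson:  (N, N) bispectral matrix; row = excitation wavelength, column = emission
        # illuminant: (N,) spectral power of the taking illuminant
        # The fluoresced term is the excitation-weighted column sum of the Donaldson
        # matrix, normalized by the illuminant power at each emission wavelength.
        emitted = (illuminant @ donaldson) / illuminant
        return reflected + emitted

    n = 31
    reflected = np.full(n, 0.3)
    donaldson = np.triu(np.random.rand(n, n) * 0.01, k=1)   # emission only at longer wavelengths
    warm = np.linspace(0.2, 1.0, n)                          # placeholder tungsten-like source
    cool = np.linspace(1.0, 0.6, n)                          # placeholder daylight-like source
    print(total_radiance_factor(reflected, donaldson, warm)[:5])
    print(total_radiance_factor(reflected, donaldson, cool)[:5])   # differs: color shifts with the source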

Size Effects
Suppose we have a digital image with accurate color encoding, equivalent to producing a one-to-one reproduction and having it match the work of art when viewed side by side. When this image is reduced in size, e.g., in a book or catalog, it no longer matches the appearance of the art. Fundamental vision research was performed to better understand the reason for this mismatch. Psychophysical experiments measured a set of observers’ luminance and color contrast sensitivities as a function of image size at different spatial frequencies and contrasts. A high-resolution LCD and a DMD projector were used to generate the stimuli. Algorithms were developed based on these results and used to improve the color matching of a set of test images of paintings.

3-D Imaging
Cultural heritage is most commonly accessed in two ways: in real life or as a reproduction in print or on a display. The latter has limited realism since it reduces the observer’s interactive experience to one pre-defined by a photographer. That is, the complex interplay among the lighting, the work of art, and the observer has been condensed to a single image based on a photographer’s subjective decisions.

The purpose of this research project is to develop a practical methodology for imaging cultural heritage that is not limited to a single subjective image. Instead, it will be a comprehensive record of the object’s optical properties. This requires first measuring the geometric and spectral properties of the art using an imaging gonio-spectrophotometer, an instrument in which a light source and a spectral-based camera are moved independently in three-dimensional space around the object to capture data known as the bidirectional reflectance distribution function, or BRDF. To render an object realistically, an instrument that can measure its shape, as well as the 3-D properties of its environment, is also required. From these measurements, mathematical models from the domain of computer graphics are used to render the object for an unlimited set of viewing experiences. These can be presented interactively using computer-controlled displays, or statically, where different renderings are created for purposes such as documentation, publication, conservation, and scholarship.
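
Once the BRDF has been captured, rendering amounts to evaluating the reflection equation for whatever lighting and viewing geometry is desired. The sketch below approximates that integral as a sum over discrete light directions, with a Lambertian placeholder standing in for measured BRDF data; the project itself relies on established computer-graphics models rather than this toy.

    import numpy as np

    def reflected_radiance(brdf, normal, view_dir, lights):
        # brdf:     callable f(light_dir, view_dir, normal) -> spectral BRDF, shape (bands,)
        # normal:   (3,) unit surface normal
        # view_dir: (3,) unit vector from the surface toward the camera
        # lights:   list of (unit direction toward light, spectral radiance (bands,))
        # Approximates L_out = sum_i f(w_i, w_o) * L_i * max(cos(theta_i), 0).
        total = 0.0
        for light_dir, radiance in lights:
            cos_theta = max(np.dot(normal, light_dir), 0.0)
            total = total + brdf(light_dir, view_dir, normal) * radiance * cos_theta
        return total

    bands = 31
    albedo = np.full(bands, 0.4)                      # placeholder spectral albedo
    lambertian = lambda wi, wo, n: albedo / np.pi     # stand-in for a measured BRDF table
    normal = np.array([0.0, 0.0, 1.0])
    view = np.array([0.0, 0.0, 1.0])
    lights = [(np.array([0.0, 0.0, 1.0]), np.ones(bands))]
    print(reflected_radiance(lambertian, normal, view, lights)[:5])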

This research project will produce a measurement system appropriate for use in a museum-imaging studio that captures sufficient spectral, geometric, and shape information to enable the realistic rendering of paintings and drawings over typical viewing geometries. The program is divided into two phases. We are currently in Phase 1.

Phase 1: Construct and Optimize a Practical BRDF System
Stage 1-A: Instrument Development
Stage 1-B: Data Collection of Objects
Stage 1-C: Implementing Rendering Algorithms
Stage 1-D: Model Evaluation – Physics
Stage 1-E: Model Evaluation – Psychophysics

Phase 2: 3-D Scanning and Reduction to Practice
Stage 2-A: Laser Scanner Acquisition and Incorporation
Stage 2-B: Defining User Needs
Stage 2-C: Data Collection of Museum Lighting Environments
Stage 2-D: Practical Implementation
Stage 2-E: System Verification

 
