Computational Imaging for Art and Cultural Heritage

Computational imaging is a powerful tool for the preservation and analysis of art and cultural heritage. It provides detailed, non-invasive, and non-destructive documentation of paintings, sculptures, and even delicate objects such as stained glass artworks or analog holograms, which are particularly prone to degradation. High-resolution digital (3D) models reveal surface details, structural features, and signs of wear, supporting condition monitoring and virtual restoration without risking damage.

In several of our projects we have integrated these capabilities into portable, hand-guided 3D imaging devices that often consist only of off-the-shelf hardware such as ordinary mobile phones or tablets. This makes the methods accessible to a wider community of users and institutions.

High-Quality 3D Imaging with Commodity Devices

This research track introduces a series of systems that require only commodity devices such as screens, webcams, low-end tablets, or mobile phones to capture high-quality 3D data.

The developed “Mobile Multiview Deflectometry” system exploits the screen and front camera of mobile devices for deflectometry-based measurements. It works without prior calibration and is optimized for specular surfaces such as stained glass artworks or artworks with metallic, mirror-like surfaces. To compensate for the small screen, a multi-view registration technique is applied so that large surfaces can be densely reconstructed in their entirety.

The “SkinScan” sensor principle uses the same hardware components but exploits photometric-stereo-inspired algorithms to measure matte object surfaces such as paintings or human skin. The project is a first step towards a universal, self-calibrating measurement procedure usable by a broad audience with little to no technical imaging experience.
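As a rough illustration of the photometric-stereo idea behind SkinScan (a minimal sketch of the classic Lambertian formulation, not the project's actual self-calibrating pipeline), per-pixel normals and albedo can be recovered from a few images taken under known light directions:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo.

    images:     array of shape (k, h, w) -- k grayscale images of a static scene
    light_dirs: array of shape (k, 3)    -- unit light direction for each image
    Returns per-pixel unit normals (3, h, w) and albedo (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w) intensities
    # Lambertian model: I = L @ (albedo * normal); solve least squares per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)               # normalize to unit length
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With at least three non-coplanar light directions the per-pixel system is fully determined; additional images make the least-squares estimate more robust to noise.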

Selected Publications
Archiving the visual contents of analog film holograms using light field rendering and neural radiance fields (NeRF)

An analog hologram can record the complete three-dimensional light field of an object on a simple 2D film. Even more than 70 years after the invention of holography, watching an analog film hologram still seems like pure magic. Probably its most fascinating feature is the seemingly perfect reconstruction of the recorded object, which can be viewed from every possible perspective. Being an “active material,” however, analog film holograms are prone to continuous degradation. This raises the question of how to “digitally preserve” the unique experience of viewing them for future generations, ideally without capturing terabytes of data.

In this research track, we develop procedures to render the visual content of analog film holograms from sparse image data, which can be captured in seconds using off-the-shelf devices like mobile phones. Our approaches leverage light field rendering and Neural Radiance Fields (NeRF), a learning-based method for generating new views of complex volumetric scenes. For the latter, our qualitative and quantitative experiments demonstrate that NeRF can render captured analog holograms from novel viewpoints, and can thus digitally preserve them without complicated optical setups.
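At the heart of NeRF's view synthesis is volume rendering along camera rays. The following is a minimal sketch of only the compositing step (the full method additionally trains an MLP to predict density and color at sampled points; this fragment assumes those samples are already given):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    densities: (n,)   volume density (sigma) at each sample along the ray
    colors:    (n, 3) RGB color at each sample
    deltas:    (n,)   distance between consecutive samples
    Returns the accumulated RGB color seen along the ray.
    """
    alpha = 1.0 - np.exp(-densities * deltas)        # per-sample opacity
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                          # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque sample hides everything behind it, and empty space contributes nothing, which is exactly the behavior that lets NeRF reconstruct occlusion-consistent novel views.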

Selected Publications
Hand-Guided Single-Shot Multi-Line Triangulation

(Multi-)line triangulation systems deliver a nearly perfect 3D profile of the measured part for each projected line in a single shot. However, the number of projected lines is limited by physical and information-theoretical restrictions, leading to low data densities. This research track introduces novel concepts that increase the data density of multi-line triangulation up to the physical and information-theoretical limit, yielding accurate and dense 3D models. Each model is captured in a single shot, which allows for “3D videos” of fast-moving objects or motion-robust, freehand-guided measurements with a handheld device.
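The underlying geometry can be illustrated with a toy pinhole model (hypothetical parameters, not the actual sensor calibration): depth follows from the lateral shift of the imaged line, analogous to stereo disparity, and each line pixel then back-projects to a 3D point:

```python
import numpy as np

def triangulate_line_points(us, vs, shift, baseline, f):
    """Toy laser-line triangulation (illustrative geometry only).

    us, vs:   detected line pixel coordinates in the camera image
    shift:    lateral displacement of the line relative to its reference position
    baseline: projector-camera separation (same length unit as the result)
    f:        camera focal length in pixels
    """
    shift = np.asarray(shift, dtype=float)
    z = baseline * f / shift            # depth from line displacement
    x = np.asarray(us, dtype=float) * z / f   # back-project pixel coordinates
    y = np.asarray(vs, dtype=float) * z / f
    return np.stack([x, y, z], axis=-1)
```

Because every pixel on a line yields an independent depth value, one camera frame already contains a full 3D profile per line, which is what makes single-shot operation and 3D video possible.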

In a related concept dubbed “Flying Triangulation,” we pair a “sparse” single-shot line triangulation sensor projecting ~10 straight, narrow lines with sophisticated real-time registration algorithms. The captured sparse 3D line profiles are registered to each other “on the fly” while the sensor is guided freehand around the object, or the object is moved in front of the sensor (see videos). The result is a dense 3D model of the object with high depth precision.
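On-the-fly registration relies on estimating rigid transforms between overlapping 3D profiles. A minimal sketch of the least-squares alignment step (the Kabsch algorithm, which also sits at the core of ICP-style registration) under the simplifying assumption of known point correspondences; real pipelines must additionally search for those correspondences in real time:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with R @ src + t ~= dst,
    given corresponding point sets src, dst of shape (n, 3).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Chaining such pairwise transforms as new line profiles arrive is what incrementally grows the sparse profiles into one dense model in the sensor's coordinate frame.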

Visit the Osmin3D YouTube channel for more videos.

Color 3D Movie of a talking face - RAW data (no post processing)

Color 3D Movie of another talking face - RAW data (no post processing)

Real-time 3D movie of a bouncing ping-pong ball - RAW data (no post processing)

Real-time 3D movie of a folded paper - RAW data (no post processing). High object frequencies are preserved.

How to watch a ‘3D movie’

3D movie of a talking face with unidirectional lines plus closeup - RAW data (no post processing).

3D movie of a talking face with unidirectional lines - RAW data (no post processing).

Flying Triangulation Dental Scanner

Flying Triangulation Face Scanner

3D models measured with Flying Triangulation (no post processing).

360° scan of a plaster bust.

Interview Flying Triangulation (March 2013)
German with English subtitles

Selected Publications