michael goesele's homepage

Projects

This is a selection of projects I was involved in. Please refer to the publications page for a full list of publications covering all my work in computer graphics and computer vision.


Ambient Point Clouds for View Interpolation

View interpolation and image-based rendering algorithms often produce visual artifacts in regions where the 3D scene geometry is erroneous, uncertain, or incomplete. We introduce ambient point clouds constructed from colored pixels with uncertain depth, which help reduce these artifacts while providing non-photorealistic background coloring and emphasizing reconstructed 3D geometry. Ambient point clouds are created by randomly sampling colored points along the viewing rays associated with uncertain pixels. Our real-time rendering system combines these with more traditional rigid 3D point clouds and colored surface meshes obtained using multi-view stereo. Our resulting system can handle larger-range view transitions with fewer visible artifacts than previous approaches.
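
The sampling step at the core of this idea is easy to illustrate. The following Python/numpy sketch (not the actual rendering system; the function and parameter names are made up for illustration) scatters colored points at random depths along the viewing rays of pixels marked as depth-uncertain:

```python
import numpy as np

def ambient_point_cloud(centers, ray_dirs, colors, d_near, d_far,
                        samples_per_ray=8, rng=None):
    """Sample colored points at random depths along the viewing rays of
    pixels whose depth is uncertain (the 'ambient point cloud' idea).

    centers        : (N, 3) camera center for each uncertain pixel's ray
    ray_dirs       : (N, 3) normalized viewing-ray directions
    colors         : (N, 3) pixel colors
    d_near / d_far : scalar depth range in which to scatter the samples
    """
    rng = np.random.default_rng() if rng is None else rng
    n = ray_dirs.shape[0]
    # Random depths, one set per sample, drawn uniformly in [d_near, d_far].
    depths = rng.uniform(d_near, d_far, size=(samples_per_ray, n, 1))
    points = centers[None] + depths * ray_dirs[None]            # (S, N, 3)
    point_colors = np.broadcast_to(colors, (samples_per_ray, n, 3))
    return points.reshape(-1, 3), point_colors.reshape(-1, 3)

# Example: two uncertain pixels sharing one camera center
C = np.zeros((2, 3))
D = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
D /= np.linalg.norm(D, axis=1, keepdims=True)
pts, cols = ambient_point_cloud(C, D, np.ones((2, 3)), d_near=1.0, d_far=5.0)
print(pts.shape, cols.shape)   # (16, 3) (16, 3)
```

In the real system these points are rendered together with the rigid point clouds and surface meshes; the sketch only produces the sample positions and colors.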

More information ...


Relighting Objects from Image Collections

We present an approach for recovering the reflectance of a static scene with known geometry from a collection of images taken under distant, unknown illumination. In contrast to previous work, we allow the illumination to vary between the images, which greatly increases the applicability of the approach. Using an all-frequency relighting framework based on wavelets, we are able to simultaneously estimate the per-image incident illumination and the per-surface-point reflectance. The wavelet framework allows various reflection models to be incorporated. We demonstrate the quality of our results for synthetic test cases as well as for several datasets captured under laboratory conditions. Combined with multi-view stereo reconstruction, we are even able to recover the geometry and reflectance of a scene solely from images collected from the Internet.
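
At its heart this is a bilinear estimation problem: the observed pixel values depend on the product of reflectance and (transported) illumination. The toy Python sketch below illustrates the alternating-estimation idea for a purely diffuse scene with a known linear transport matrix; it is only a conceptual analogue, not the wavelet-based all-frequency framework of the paper, and all names and dimensions are invented:

```python
import numpy as np

# Toy setup: m surface points, k lighting basis functions, one image.
# Observed intensities follow  I = diag(albedo) @ T @ light,
# where T is a known (m x k) transport matrix (visibility/cosine terms).
rng = np.random.default_rng(0)
m, k = 200, 16
T = rng.uniform(0.0, 1.0, (m, k))
albedo_true = rng.uniform(0.2, 0.9, m)
light_true = rng.uniform(0.0, 1.0, k)
I = albedo_true * (T @ light_true)

# Alternating estimation: fix the albedo and solve for the lighting
# (linear least squares), then fix the lighting and update the albedo.
albedo = np.full(m, 0.5)
for _ in range(50):
    A = albedo[:, None] * T
    light, *_ = np.linalg.lstsq(A, I, rcond=None)
    shading = T @ light
    albedo = I / np.maximum(shading, 1e-8)

print(np.allclose(albedo * (T @ light), I, atol=1e-6))   # True
```

As in the real problem, reflectance and illumination are only recovered up to a global scale factor.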

Relighting Objects from Image Collections
Tom Haber, Christian Fuchs, Philippe Bekaert, Hans-Peter Seidel, Michael Goesele, Hendrik P.A. Lensch
In: Proceedings of CVPR 2009, Miami Beach, USA, June 20-25, 2009.


High-Quality Rendering of Varying Isosurfaces with Cubic Trivariate C1-continuous Splines

Smooth trivariate splines on uniform tetrahedral partitions are well suited for high-quality visualization of isosurfaces from scalar volumetric data. We propose a novel rendering approach based on spline patches with low total degree, for which ray-isosurface intersections are computed using efficient root finding algorithms. Smoothly varying surface normals are directly extracted from the underlying spline representation. Our approach uses a combined CUDA and graphics pipeline and yields two key advantages over previous work. First, we can interactively vary the isovalues since all required processing steps are performed on the GPU. Second, we employ instancing in order to reduce shader complexity and to minimize overall memory usage. In particular, this allows us to compute the spline coefficients on-the-fly in real time on the GPU.
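
The key numerical observation is that a trivariate cubic restricted to a ray is a univariate cubic whose roots give the ray-isosurface intersections. A hypothetical CPU-side Python sketch of that step follows; the paper's implementation runs on the GPU with specialized root finders, and the function and test data below are purely illustrative:

```python
import numpy as np

def ray_isosurface_hit(f, origin, direction, t0, t1, isovalue):
    """Intersect a ray with the isosurface of a trivariate cubic f.

    Restricted to a line, a trivariate cubic is a univariate cubic, so it
    is fully determined by its values at 4 parameter positions; we fit
    that cubic and solve for its real roots (np.roots).
    """
    ts = np.linspace(t0, t1, 4)
    vals = np.array([f(origin + t * direction) for t in ts]) - isovalue
    coeffs = np.polyfit(ts, vals, 3)          # exact cubic through 4 samples
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    hits = real[(real >= t0) & (real <= t1)]
    return hits.min() if hits.size else None

# Example: cubic field x^3 + y^3 + z^3, isovalue 1, ray along +z from z = -3;
# the isosurface is hit where (t - 3)^3 = 1, i.e. at t = 4.
f = lambda p: p[0]**3 + p[1]**3 + p[2]**3
t = ray_isosurface_hit(f, np.array([0., 0., -3.]), np.array([0., 0., 1.]),
                       0.0, 6.0, 1.0)
print(t)   # 4.0
```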

High-Quality Rendering of Varying Isosurfaces with Cubic Trivariate C1-continuous Splines
Thomas Kalbe, Thomas Koch, Michael Goesele
In: Proceedings of 5th International Symposium on Visual Computing (ISVC 2009), Las Vegas, USA, November 30 - December 2, 2009.

More information ...


Massively-Parallel Simulation of Biochemical Systems

Understanding biological evolution requires a detailed understanding of the realized phenotype. Biochemical and gene regulatory dynamics are a cornerstone of cell physiology and must therefore be regarded as one of the major aspects of such a phenotype. Experimental insight into molecular parameters is, however, hard to come by, so model development requires computational parameter estimation. At the same time, designing cellular dynamics is highly efficient when done in silico. We therefore developed a computational approach for the massively parallel simulation of biological molecular networks, leveraging the computing power of modern graphics cards and other many-core programming paradigms. Our system automatically compiles standard SBML files into CUDA code, uses analytic derivatives, and computes standard measures of complex dynamics such as the Lyapunov exponent.
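
To illustrate the kind of quantity such a system computes, the Python sketch below estimates the largest Lyapunov exponent of an ODE system by tracking the divergence of two nearby trajectories with periodic renormalization (Benettin's method). It runs on the CPU and uses a standard chaotic test system (Lorenz) rather than a compiled SBML model, so it only mirrors the measure, not the massively parallel CUDA pipeline:

```python
import numpy as np

def largest_lyapunov(f, x0, dt=0.005, steps=40_000, renorm_every=20, d0=1e-8):
    """Crude estimate of the largest Lyapunov exponent of dx/dt = f(x):
    integrate a reference and a slightly perturbed trajectory with RK4,
    measure their separation, and renormalize it periodically."""
    def rk4(x):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        return x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    x = np.asarray(x0, float)
    y = x + d0 * np.ones_like(x) / np.sqrt(x.size)
    log_sum, t = 0.0, 0.0
    for i in range(1, steps + 1):
        x, y = rk4(x), rk4(y)
        if i % renorm_every == 0:
            d = np.linalg.norm(y - x)
            log_sum += np.log(d / d0)
            t += renorm_every * dt
            y = x + (y - x) * (d0 / d)        # rescale the perturbation
    return log_sum / t

# Example: Lorenz system, a standard chaotic test case (lambda_1 ~ 0.9).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
lorenz = lambda s: np.array([sigma * (s[1] - s[0]),
                             s[0] * (rho - s[2]) - s[1],
                             s[0] * s[1] - beta * s[2]])
print(largest_lyapunov(lorenz, [1.0, 1.0, 1.0]))
```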

Massively-Parallel Simulation of Biochemical Systems
Jens Ackermann, Paul Baecher, Thorsten Franzel, Michael Goesele, Kay Hamacher
In: Proceedings of Massively Parallel Computational Biology on GPUs, Jahrestagung der Gesellschaft für Informatik e.V., Lübeck, Germany, September 29, 2009.

More information ...


Multi-View Stereo for Community Photo Collections

We present a multi-view stereo algorithm that addresses the extreme changes in lighting, scale, clutter, and other effects in large online community photo collections. Our idea is to intelligently choose images to match, both at a per-view and per-pixel level. We show that such adaptive view selection enables robust performance even with dramatic appearance variability. The stereo matching technique takes as input sparse 3D points reconstructed from structure-from-motion methods and iteratively grows surfaces from these points. Optimizing for surface normals within a photoconsistency measure significantly improves the matching results. While the focus of our approach is to estimate high-quality depth maps, we also show examples of merging the resulting depth maps into compelling scene reconstructions. We demonstrate our algorithm on standard multi-view stereo datasets and on casually acquired photo collections of famous scenes gathered from the Internet.
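
A much-simplified sketch of the per-view selection idea: candidate views are scored by how many structure-from-motion points they share with the reference view, with contributions down-weighted when the triangulation angle is tiny (near-duplicate viewpoints). The actual selection also accounts for image scale and additionally operates at the per-pixel level; the scoring below, including the angle weighting, is purely illustrative:

```python
import numpy as np

def view_score(ref_pts, cand_pts, ref_center, cand_center, points_3d,
               angle_sigma_deg=10.0):
    """Toy global view selection: score a candidate view against a reference
    view by their shared SfM points, down-weighting points whose
    triangulation angle between the two viewpoints is very small.

    ref_pts / cand_pts : sets of 3D-point ids visible in each view
    points_3d          : dict id -> (3,) point position
    """
    score = 0.0
    for pid in ref_pts & cand_pts:
        p = points_3d[pid]
        v1, v2 = ref_center - p, cand_center - p
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        # Weight rises with the angle up to ~angle_sigma, then saturates at 1.
        score += min(ang / angle_sigma_deg, 1.0) ** 2
    return score

# Example with three shared points and two nearby cameras
pts3d = {0: np.array([0., 0., 5.]), 1: np.array([1., 0., 5.]),
         2: np.array([0., 1., 6.])}
print(view_score({0, 1, 2}, {0, 1, 2},
                 np.array([0., 0., 0.]), np.array([1., 0., 0.]), pts3d))
```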

Multi-View Stereo for Community Photo Collections
Michael Goesele, Noah Snavely, Brian Curless, Hugues Hoppe, Steven M. Seitz
In: Proceedings of ICCV 2007, Rio de Janeiro, Brazil, October 14-20, 2007.

More information is available at the project page and at our community photo collections (CPC) page.

Input image and rendering of the reconstructed templeFull dataset

Multi-View Stereo Revisited

We present an extremely simple yet robust multi-view stereo algorithm and analyze its properties. The algorithm first computes individual depth maps using a window-based voting approach that returns only good matches. The depth maps are then merged into a single mesh using a straightforward volumetric approach. We show results for several datasets with accuracy comparable to the best current state-of-the-art techniques, rivaling more complex algorithms.
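
A minimal sketch of window-based matching that "returns only good matches": for a rectified image pair, each pixel votes for the disparity with the highest normalized cross-correlation, but only if that correlation exceeds a confidence threshold; otherwise the pixel contributes nothing. This is a toy two-view version, and the window size and threshold are arbitrary:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def match_pixel(ref, other, y, x, max_disp=32, win=5, min_ncc=0.8):
    """Window-based matching for one pixel of a rectified image pair:
    test all disparities, keep the best one only if its correlation is
    high enough -- otherwise return None (no vote from this pixel)."""
    h = win // 2
    patch = ref[y-h:y+h+1, x-h:x+h+1]
    best_d, best_c = None, -1.0
    for d in range(max_disp + 1):
        if x - d - h < 0:
            break
        c = ncc(patch, other[y-h:y+h+1, x-d-h:x-d+h+1])
        if c > best_c:
            best_d, best_c = d, c
    return best_d if best_c >= min_ncc else None

# Example: a horizontally shifted random image; the true disparity is 7.
rng = np.random.default_rng(1)
ref = rng.random((50, 80))
other = np.roll(ref, -7, axis=1)
print(match_pixel(ref, other, y=25, x=40))   # 7
```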

Multi-View Stereo Revisited
Michael Goesele, Steven M. Seitz, and Brian Curless
In: Proceedings of CVPR 2006, New York, New York, USA, June 17-22, 2006.

The reconstructed geometry models have been submitted to the Multi-View Stereo Evaluation. See the latest results for a comparison to other current approaches.

Input image and rendering of the reconstructed templeFull dataset  

Mesostructure from Specularity

We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which take specularities as outliers and discard them, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects with a wide variety of reflection properties, including translucent, low albedo, and highly specular objects. We show results for a variety of objects including skin, apricot, orange, jelly candy, black leather and dark chocolate.
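
The geometric cue behind the method is simple: when a surface point mirrors a known light direction into the camera, its normal is the half-vector between the light and viewing directions. The sketch below computes per-pixel normals from the light that produced the strongest specular response at each pixel; the actual system additionally acquires a dense specularity field progressively and reconstructs the surface from the recovered normals. All names and the toy data are illustrative:

```python
import numpy as np

def normals_from_specularity(light_dirs, brightest_idx,
                             view_dir=np.array([0., 0., 1.])):
    """For each pixel, take the light direction under which it appeared most
    specular; the surface normal is then (approximately) the half-vector
    between that light direction and the viewing direction.

    light_dirs    : (L, 3) unit light directions used during acquisition
    brightest_idx : (H, W) index of the light giving the strongest
                    specular response at each pixel
    """
    L = light_dirs[brightest_idx]                 # (H, W, 3)
    h = L + view_dir                              # unnormalized half-vector
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

# Example: 3 lights and a 2x2 "image" where each pixel peaked under one light
lights = np.array([[0., 0., 1.], [1., 0., 1.], [0., 1., 1.]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
idx = np.array([[0, 1], [2, 0]])
print(normals_from_specularity(lights, idx))
```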

Mesostructure from Specularity
Tongbo Chen, Michael Goesele, and Hans-Peter Seidel
In: Proceedings of CVPR 2006, New York, New York, USA, June 17-22, 2006.

More information is available on Tongbo Chen's project page at the MPI Informatik.

Mesostructure of orange peel

Volumetric Density Capture From a Single Image

We propose a new approach to capture the volumetric density of scattering media instantaneously with a single image. The volume is probed with a set of laser lines and the scattered intensity is recorded by a conventional camera. We then determine the density along the laser lines taking the scattering properties of the media into account. A specialized interpolation technique reconstructs the full density field in the volume. We apply the technique to capture the volumetric density of participating media such as smoke.
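
A simplified version of the per-line density estimation, assuming single scattering and accounting only for attenuation of the laser itself (camera-path attenuation and the phase function, which the full method handles, are folded into a constant here): marching along the line, each scattered sample is divided by the laser transmittance accumulated so far. The sketch and its parameters are illustrative:

```python
import numpy as np

def density_along_laser_line(scattered, dx=0.01, sigma_t=1.0, I0=1.0):
    """Recover relative density samples rho_i along one laser line from the
    scattered intensity recorded by the camera, assuming

        scattered_i ~ I0 * T_i * rho_i,
        T_i = exp(-sigma_t * dx * sum_{j<i} rho_j).
    """
    rho = np.zeros_like(scattered, dtype=float)
    transmittance = 1.0
    for i, s in enumerate(scattered):
        rho[i] = s / (I0 * transmittance)
        transmittance *= np.exp(-sigma_t * dx * rho[i])
    return rho

# Round trip: synthesize measurements from a known density, then invert.
true_rho = np.concatenate([np.zeros(10), np.linspace(0, 5, 30), np.zeros(10)])
T = np.exp(-1.0 * 0.01 * np.cumsum(np.concatenate([[0.0], true_rho[:-1]])))
measured = T * true_rho
print(np.allclose(density_along_laser_line(measured), true_rho))   # True
```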

Density Estimation for Dynamic Volumes
Christian Fuchs, Tongbo Chen, Michael Goesele, Holger Theisel and Hans-Peter Seidel
computers & graphics, 31, 2, 2007, to appear.

Volumetric Density Capture From a Single Image
Christian Fuchs, Tongbo Chen, Michael Goesele, Holger Theisel, Hans-Peter Seidel
In: Proceedings of the International Workshop on Volume Graphics 2006, Boston, Massachusetts, USA, July 30-31, 2006.

Input image to capture volumetric density  

DISCO - Acquisition of Translucent Objects

Translucent objects are characterized by diffuse light scattering beneath the object's surface. Light enters and leaves an object at possibly distinct surface locations. This paper presents the first method to acquire this transport behavior for arbitrary inhomogeneous objects. Individual surface points are illuminated in our DISCO measurement facility and the object's impulse response is recorded with a high-dynamic range video camera. The acquired data is resampled into a hierarchical model of the object's light scattering properties. Missing values are consistently interpolated resulting in measurement-based, complete and accurate representations of real translucent objects which can be rendered with various algorithms.
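
Conceptually, the acquired impulse responses form a large scattering matrix that maps incident illumination at the sampled surface points to an exitant image, so relighting reduces to a matrix-vector product. The dense-matrix sketch below only conveys this view; the acquired data is resampled into a hierarchical model precisely because the full matrix would be far too large. All sizes and values are made up:

```python
import numpy as np

# Each column of S stores the object's (vectorized) response image recorded
# when a single surface point was illuminated during acquisition.
rng = np.random.default_rng(2)
num_surface_points = 64          # illumination sample positions
num_pixels = 128 * 128           # output image resolution (flattened)

S = rng.random((num_pixels, num_surface_points)) * 0.01   # acquired responses
incident = rng.random(num_surface_points)                 # new illumination
image = S @ incident                                       # relit image
print(image.reshape(128, 128).shape)
```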

DISCO - Acquisition of Translucent Objects
Michael Goesele, Hendrik P. A. Lensch, Jochen Lang, Christian Fuchs and Hans-Peter Seidel
In: ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2004), Los Angeles, USA, August 8-12, ACM, New York, 2004.

More information ...

Starfruit rendered with acquired BSSRDF  

Validation of Color Managed 3D Appearance Acquisition

Image-based appearance acquisition algorithms can generate realistic 3D models of real objects but have previously not operated in a calibrated color space. We integrate a color managed high-dynamic range imaging technique into a recent appearance acquisition algorithm and generate models in CIE XYZ color space. We compare the final models with spectrophotometric measurements and compute difference images between renderings and ground truth images. Displayed renderings and printouts are compared to the original objects under identical illumination conditions to evaluate and validate the complete appearance reproduction pipeline. Working in CIE XYZ color space allows perceivable differences to be expressed in a standardized measure.
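
Working in CIE XYZ makes it possible to quantify differences perceptually, for example by converting both the rendering and the ground-truth image to CIELAB and computing a per-pixel Delta E. The sketch below uses the standard CIE XYZ to CIELAB conversion with a D65 white point and the simple CIE76 color difference; the exact difference measure used in the paper may differ:

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):   # D65 white point
    """Convert CIE XYZ to CIELAB (CIE 1976)."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(xyz1, xyz2):
    """Per-pixel perceptual difference (CIE76 Delta E) of two XYZ images."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)

# Example: difference between a rendering and a slightly brighter "photo"
rendering = np.random.rand(4, 4, 3) * 100.0
photo = rendering + 1.0
print(delta_e76(rendering, photo).mean())
```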

Validation of Color Managed 3D Appearance Acquisition
Michael Goesele, Hendrik P. A. Lensch, and Hans-Peter Seidel
In: Proceedings of the 12th IS&T Color Imaging Conference, Scottsdale, Arizona, USA, November 9-12, 2004.

Real and synthetic object under identical illumination  

Accurate Light Source Acquisition and Rendering

Realistic image synthesis requires both complex and realistic models of real-world light sources and efficient rendering algorithms to deal with them. In this paper, we describe a processing pipeline for dealing with complex light sources from acquisition to global illumination rendering. We carefully design optical filters to guarantee high precision measurements of real-world light sources. We discuss two practically feasible setups that allow us to measure light sources with different characteristics. Finally, we introduce an efficient importance-driven photon emission algorithm for our representation that can be used, for example, in conjunction with Photon Maps.
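
The idea behind importance-driven photon emission can be sketched as drawing emission bins in proportion to the measured intensity distribution, so photons are spent where the light source actually emits energy. The discretized inverse-CDF sampler below is a deliberately simplified stand-in for the emission algorithm and light source representation described in the paper:

```python
import numpy as np

def sample_emission_bins(intensity, n_photons, rng=None):
    """Draw photon emission bins with probability proportional to the
    measured intensity per bin; each photon carries total_power / n_photons
    so that the emitted flux is preserved.

    intensity : (B,) non-negative measured intensity per direction bin
    """
    rng = np.random.default_rng() if rng is None else rng
    p = intensity / intensity.sum()
    bins = rng.choice(intensity.size, size=n_photons, p=p)
    photon_power = intensity.sum() / n_photons
    return bins, photon_power

# Example: a light source that emits mostly into two of five bins
measured = np.array([0.1, 0.1, 5.0, 10.0, 0.2])
bins, w = sample_emission_bins(measured, 10000, np.random.default_rng(3))
print(np.bincount(bins, minlength=5) / 10000, w)
```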

Accurate Light Source Acquisition and Rendering
Michael Goesele, Xavier Granier, Wolfgang Heidrich and Hans-Peter Seidel
In: ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2003), San Diego, USA, July 27-31, ACM, New York, 2003, 621-630.

Interactive Visualization of Complex Real-World Light Sources
Xavier Granier, Michael Goesele, Wolfgang Heidrich and Hans-Peter Seidel
In: Proceedings of Pacific Graphics 2003, Canmore, Canada, October 8-10, IEEE Computer Society Press, 2003, 8 p.

More information ...

scene illuminated by an acquired light source

Image-Based Reconstruction of Spatially Varying Materials

The use of realistic models for all components of image synthesis is a fundamental prerequisite for photorealistic rendering. Generating these models manually often becomes infeasible as the demand for visual complexity steadily increases. We concentrate on the acquisition of realistic materials. In particular, we describe an acquisition method for shift-variant BRDFs, i.e., a specific BRDF for each surface point.
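
To give a flavor of per-point reflectance fitting, the sketch below fits one diffuse and one specular coefficient for a single texel from several observations under known light and view directions, using a fixed-exponent Phong-style lobe so the fit stays linear. The actual method fits a Lafortune model with material clustering and is considerably more involved; everything here is illustrative:

```python
import numpy as np

def fit_texel_brdf(intensities, normal, light_dirs, view_dirs, phong_exp=20.0):
    """Fit diffuse and specular coefficients (k_d, k_s) for one texel by
    linear least squares under the simple model

        I_j = k_d * max(n.l_j, 0) + k_s * max(r_j.v_j, 0)^phong_exp
    """
    n = normal / np.linalg.norm(normal)
    ndotl = light_dirs @ n
    diffuse = np.clip(ndotl, 0.0, None)
    refl = 2.0 * ndotl[:, None] * n - light_dirs       # mirrored light dirs
    specular = np.clip(np.sum(refl * view_dirs, axis=1), 0.0, None) ** phong_exp
    A = np.stack([diffuse, specular], axis=1)
    (kd, ks), *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return kd, ks

# Round trip: synthesize observations with known coefficients, then fit.
rng = np.random.default_rng(4)
L = rng.normal(size=(40, 3)); L[:, 2] = np.abs(L[:, 2]) + 0.5
L /= np.linalg.norm(L, axis=1, keepdims=True)
V = np.tile([0.0, 0.0, 1.0], (40, 1))
n = np.array([0.0, 0.0, 1.0])
d = np.clip(L @ n, 0, None)
s = np.clip(np.sum((2 * d[:, None] * n - L) * V, axis=1), 0, None) ** 20.0
print(fit_texel_brdf(0.6 * d + 0.3 * s, n, L, V))       # ~(0.6, 0.3)
```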

Image-Based Reconstruction of Spatial Appearance and Geometric Detail
Hendrik P. A. Lensch, Jan Kautz, Michael Goesele, Wolfgang Heidrich and Hans-Peter Seidel
ACM Transactions on Graphics 22, 2, pages 234-257, 2003

More information ...

bust rendered with spatially varying BRDFs

Accuracy of 3D Range Scanners by Measurement of the Slanted Edge Modulation Transfer Function

We estimate the accuracy of a 3D range scanner in terms of its spatial frequency response. We determine a scanner's modulation transfer function (MTF) in order to measure its frequency response. A slanted edge is scanned from which we derive a superresolution edge profile. Its Fourier transform is compared to the Fourier transform of an ideal edge in order to determine the MTF of the device. This allows us to determine how well small details can be acquired by the 3D scanner. We report the results of several measurements with two scanners under various conditions.
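
The measurement principle translates almost directly into code: project the samples onto the edge normal, bin them into a super-resolution edge-spread function, differentiate to obtain the line-spread function, and take the normalized Fourier magnitude. The Python sketch below follows this recipe on synthetic 1D data; the bin size and windowing are arbitrary choices, not those of the paper:

```python
import numpy as np

def slanted_edge_mtf(positions, values, bin_size=0.25):
    """Slanted-edge MTF sketch: 'positions' are sample distances to the
    ideal edge (e.g. scanned points projected onto the edge normal) and
    'values' the measured profile across it.  Samples are binned into a
    super-resolution edge-spread function (ESF); its derivative is the
    line-spread function (LSF), whose normalized Fourier magnitude is the MTF.
    """
    edges = np.arange(positions.min(), positions.max() + bin_size, bin_size)
    idx = np.digitize(positions, edges)
    esf = np.array([values[idx == i].mean() for i in range(1, len(edges))
                    if np.any(idx == i)])
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)                 # reduce truncation artifacts
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Example: a blurred step edge sampled at irregular positions
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-10, 10, 2000))
profile = 0.5 * (1.0 + np.tanh(x / 1.5))        # smooth edge profile
print(slanted_edge_mtf(x, profile)[:5])         # falls off from 1.0
```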

Accuracy of 3D Range Scanners by Measurement of the Slanted Edge Modulation Transfer Function
Michael Goesele, Christian Fuchs and Hans-Peter Seidel
In: 4th International Conference on 3-D Digital Imaging and Modeling, Banff, Canada, October 6-10, IEEE Computer Society, Los Alamitos, 2003, 8 p.

scanned edge used for MTF calculation

Interactive Rendering of Translucent Objects

We present a rendering method for translucent objects, in which view point and illumination can be modified at interactive rates. In a preprocessing step the impulse response to incoming light impinging at each surface point is computed and stored in two different ways: The local effect on close-by surface points is modeled as a per-texel filter kernel that is applied to a texture map representing the incident illumination. The global response (i.e. light shining through the object) is stored as vertex-to-vertex throughput factors for the triangle mesh of the object. During rendering, the illumination map for the object is computed according to the current lighting situation and then filtered by the precomputed kernels. The illumination map is also used to derive the incident illumination on the vertices which is distributed via the vertex-to-vertex throughput factors to the other vertices. The final image is obtained by combining the local and global response. We demonstrate the performance of our method for several models.
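
The run-time cost of the global response is easy to see in a toy example: once the vertex-to-vertex throughput factors are precomputed, light shining through the object amounts to one matrix-vector product per frame. The sizes and values below are made up purely for illustration:

```python
import numpy as np

# F[i, j] stores the precomputed throughput from vertex j to vertex i.
rng = np.random.default_rng(6)
num_vertices = 500
F = rng.random((num_vertices, num_vertices)) * 1e-3   # precomputed offline
incident = rng.random(num_vertices)                   # from the illumination map
global_response = F @ incident                        # light shining through
# The final vertex color would add the locally filtered response on top.
print(global_response.shape)
```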

Interactive Rendering of Translucent Objects
Hendrik P. A. Lensch, Michael Goesele, Philippe Bekaert, Jan Kautz, Marcus A. Magnor, Jochen Lang and Hans-Peter Seidel
In: Proceedings of Pacific Graphics 2002, Beijing, China, October 9-11, IEEE Computer Society, Los Alamitos, 2002, 214-224.

translucent horse rendered interactively
© 2005-2013 Michael Goesele / TU Darmstadt