Capturing and Rendering Real-World Environments

Daniel Aliaga
Host: Jonathan Cohen

Computer simulation of real-world environments is one of the grand challenges of computer graphics. Applications for this technology include remote education, virtual heritage, specialist training, electronic commerce, and entertainment. Unfortunately, current computer graphics techniques fall far short of providing solutions for this challenge. In this talk, I describe a new approach to capturing and rendering large real-world environments, including several successful demonstrations and future research plans.

My ultimate goal is to allow an untrained operator to walk into a city or building (e.g., a museum) and wave around some device that captures a digital model, which later can be used to provide many people with the realistic visual experience of “walking” through the environment interactively. My approach is to take advantage of recent technology trends and to obtain a dense, automatic sampling of a large viewpoint space with omnidirectional images. This strategy replaces the difficult computer vision problems of 3D reconstruction and surface reflectance modeling with the easier problems of motorized cart navigation, data compression, and working set management.
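To make the working-set idea concrete, the sketch below shows one plausible shape such a system could take: a query viewpoint is matched to the nearest captured omnidirectional image, and a small least-recently-used cache keeps only the currently needed decoded panoramas in memory. This is a minimal illustration under assumed details, not the actual system described in the talk; the names `PanoramaCache` and `nearest_viewpoint` are hypothetical.

```python
from collections import OrderedDict
import math


class PanoramaCache:
    """Toy LRU working set for decoded omnidirectional images (hypothetical)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._cache = OrderedDict()  # viewpoint id -> decoded image

    def get(self, vid, load_fn):
        # Return a cached image, or load (e.g., decompress from disk/network)
        # and evict the least recently used entry if over capacity.
        if vid in self._cache:
            self._cache.move_to_end(vid)
            return self._cache[vid]
        image = load_fn(vid)
        self._cache[vid] = image
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used
        return image


def nearest_viewpoint(query, viewpoints):
    """Id of the captured viewpoint closest to the query position (2D here)."""
    return min(viewpoints, key=lambda vid: math.dist(query, viewpoints[vid]))
```

As the viewer moves, each frame would look up the nearest captured viewpoint and fetch its image through the cache, so only a small, spatially coherent subset of the full dataset is resident at any time.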

I also provide a summary of related approaches and a research plan to capture and reconstruct large environments. This plan benefits from collaborative efforts in robotics (for building self-navigating high-resolution capture devices), computer vision (for developing image-reconstruction algorithms), and systems (for building and deploying large software systems over a network) and from developing applications to foster interactive tourism, to preserve historical sites, and to assist with simulation and training scenarios.