Google’s New Volumetric Capture System: Realistic Character Lighting for Any Environment

Relightables Volumetric Capture Technology: Google’s new system captures full-body reflectance of 3D human performances and blends them seamlessly into real-world AR or into digital scenes in films, games, and more. Credit: SIGGRAPH Asia

Even novice photographers and videographers shooting on handheld devices often consider their subject’s lighting. Lighting is critical in filmmaking, gaming, and virtual and augmented reality, and it can make or break a scene and the performers in it. Replicating realistic character lighting has remained a difficult challenge in computer graphics and computer vision.

Significant progress has been made on volumetric capture systems that focus on 3D geometric reconstruction with high-resolution textures, including methods that achieve realistic shapes and textures of the human face. Much less work, however, has gone into recovering the photometric properties needed to relight characters: results from such systems lack fine detail, and the subject’s shading is baked into the texture.

Computer scientists at Google are revolutionizing this area of volumetric capture technology with a novel, comprehensive system that can, for the first time, capture full-body reflectance of 3D human performances and seamlessly blend them into the real world through AR or into digital scenes in films, games, and more. Google will present its new system, called The Relightables, at ACM SIGGRAPH Asia, held November 17 to 20 in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people in computer graphics, animation, interactivity, gaming, and emerging technologies from around the world.

There have been major advances in this realm of work, which the industry calls 3D capture systems. Through these sophisticated systems, viewers have seen digital characters come to life on the big screen in blockbusters such as Avatar and the Avengers films.

Indeed, volumetric capture technology has reached a high level of quality, but many of these reconstructions still lack true photorealism. In particular, despite using high-end studio setups with green screens, these systems still struggle to capture high-frequency details of humans and recover only a fixed illumination condition. This makes them unsuitable for photorealistic rendering of actors or performers in arbitrary scenes under different lighting conditions.

Google’s Relightables system makes it possible to customize lighting on characters in real time or relight them in any given scene or environment.

The researchers demonstrate this on subjects recorded inside a custom geodesic sphere outfitted with 331 color LED lights (also called a Light Stage capture system), an array of high-resolution cameras, and a set of custom high-resolution depth sensors. The Relightables system captures about 65 GB per second of raw data from nearly 100 cameras, and its computational framework makes it possible to process data effectively at that scale.
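As a rough sanity check on that throughput figure: the article gives the total rate (about 65 GB per second) and the camera count (nearly 100), but not the per-camera settings. The frame rate, resolution, and bit depth in the sketch below are hypothetical guesses chosen only to show that the numbers are plausible.

```python
# Back-of-the-envelope check of the ~65 GB/s figure. The per-camera
# frame rate, resolution, and bit depth are assumptions, not values
# from the article.
num_cameras = 100        # "nearly 100 cameras" (from the article)
fps = 60                 # assumed capture rate
pixels = 12.4e6          # assumed ~12.4-megapixel sensors
bytes_per_pixel = 1      # assumed 8-bit raw Bayer data

per_camera = fps * pixels * bytes_per_pixel   # bytes per second
total = num_cameras * per_camera

print(f"per camera: {per_camera / 1e9:.2f} GB/s")  # ~0.74 GB/s
print(f"whole rig:  {total / 1e9:.1f} GB/s")       # ~74 GB/s
```

Under these assumptions the rig lands in the tens of gigabytes per second, the same order of magnitude as the reported 65 GB/s.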

Their system captures reflectance information on a person; the way lighting interacts with skin is a major factor in how realistic digital humans appear. Previous attempts either used flat lighting or required computer-generated characters. Not only does the system capture a person’s reflectance, it records while the person moves freely within the capture volume. As a result, the captured performance can be relit in arbitrary environments.
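The paper’s reflectance model is considerably more sophisticated, but the core idea of relighting can be illustrated with a minimal diffuse (Lambertian) sketch: combine a captured albedo map and surface normals with the lights of a new environment. The function and parameter names below are hypothetical, not from the paper.

```python
import numpy as np

def relight_diffuse(albedo, normals, light_dirs, light_colors):
    """Minimal Lambertian relighting sketch (not the paper's model).

    albedo:       (H, W, 3) captured per-texel base color
    normals:      (H, W, 3) unit surface normals
    light_dirs:   (L, 3) unit directions toward each light in the new
                  environment (e.g., samples of an environment map)
    light_colors: (L, 3) RGB intensity of each light
    """
    # n . l for every texel and light, clamped to the front hemisphere
    ndotl = np.clip(np.einsum("hwc,lc->hwl", normals, light_dirs), 0.0, None)
    # Sum each light's contribution, then modulate by the albedo
    irradiance = np.einsum("hwl,lc->hwc", ndotl, light_colors)
    return albedo * irradiance

# Toy usage: one texel facing a single white light head-on
albedo = np.full((1, 1, 3), 0.8)
normals = np.zeros((1, 1, 3)); normals[..., 2] = 1.0
out = relight_diffuse(albedo, normals,
                      np.array([[0.0, 0.0, 1.0]]),
                      np.array([[1.0, 1.0, 1.0]]))
print(out)  # [[[0.8 0.8 0.8]]]
```

A diffuse-only model like this ignores specular highlights and shadowing, which is part of why shading baked into a texture looks wrong when a capture is pasted into a new scene; the Relightables system recovers richer photometric properties precisely to avoid that.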

Historically, cameras have recorded people from a single viewpoint under a single lighting condition. This new system, the researchers note, lets users record someone and then view them from any viewpoint and under any lighting condition, removing the need for a green screen to create special effects and allowing for more flexible lighting.

The interactions of space, light, and shadow between a performer and their environment play a critical role in creating a sense of presence. Beyond just ‘cutting and pasting’ a 3D video capture, the system makes it possible to record someone and then seamlessly place them into new environments, whether in their own space for an AR experience or in the world of a VR, film, or game experience.

At SIGGRAPH Asia, The Relightables team will present the components of their system, from capture to processing to display, with video demos of each stage. They will walk attendees through the ins and outs of building The Relightables, describing the major challenges they tackled in the work and showcasing some cool applications and renderings.

Reference: “The Relightables: Volumetric Performance Capture of Humans with Realistic Relighting” by Kaiwen Guo, Peter Lincoln, Philip Davidson, Jay Busch, Xueming Yu, Matt Whalen, Geoff Harvey, Sergio Orts-Escolano, Rohit Pandey, Jason Dourgarian, Danhang Tang, Anastasia Tkach, Adarsh Kowdle, Emily Cooper, Mingsong Dou, Sean Fanello, Graham Fyffe, Christopher Rhemann, Jonathan Taylor, Paul Debevec and Shahram Izadi, 8 November 2019, ACM Transactions on Graphics.
DOI: 10.1145/3355089.3356571
