Photo by Andreas Psaltis © 2019 ACM SIGGRAPH
On Tuesday, SIGGRAPH 2019’s Real-Time Live! took over West Hall B of the Los Angeles Convention Center. The show kicked off the evening of 30 July with a warm (and “punny”) welcome from Real-Time Live! Chair Gracie Arenas Strittmatter, who made attendees feel at home with a friendly Texas “y’all.”
Strittmatter set the stage for Real-Time Live! by sharing a retrospective of the show’s history and stating, “This is a show where anything can happen as creators give live demos of the leading-edge techniques they use to create stunning, interactive experiences.”
Then, a select group of industry thought leaders demonstrated jury-selected innovations that champion real-time technology. Here, we share a brief overview of each of the 10 presentations from the SIGGRAPH 2019 Real-Time Live! session.
Quixel’s Rebirth: Megascans Environment Breakdown
Galen Davis (Quixel) broke down the tools, tips, and techniques behind Quixel’s “Rebirth,” with the goal of showing how easy it is to create a realistic environment within minutes. To show how readily users can navigate Quixel Mixer, Davis built an Icelandic sand material live, demonstrating ways to bring in detail, such as blending from below to make the scene look natural. Bonus: Quixel Mixer is free to download now!
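Mixer’s actual controls weren’t spelled out on stage, but the “blend from below” idea, letting a lower layer punch through wherever its height beats the layer above, can be sketched in a few lines. A rough, hypothetical illustration (function names are ours, not Quixel’s API):

```python
# Hypothetical sketch of height-based layer blending, the kind of
# "blend from below" compositing a material-mixing tool performs.
# All names here are illustrative, not Quixel Mixer's actual API.
import numpy as np

def height_blend(albedo_a, height_a, albedo_b, height_b, softness=0.05):
    """Blend two material layers so the one with greater height
    shows through, with a soft transition band."""
    # Positive where layer B sits above layer A.
    delta = height_b - height_a
    # Smooth ramp across the transition band.
    t = np.clip(delta / softness * 0.5 + 0.5, 0.0, 1.0)
    return albedo_a * (1.0 - t[..., None]) + albedo_b * t[..., None]

# Tiny usage example: 4x4 textures with random heights.
rng = np.random.default_rng(0)
sand = rng.random((4, 4, 3)); sand_h = rng.random((4, 4))
rock = rng.random((4, 4, 3)); rock_h = rng.random((4, 4))
blended = height_blend(sand, sand_h, rock, rock_h)
print(blended.shape)  # (4, 4, 3)
```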
Real-Time, Single Camera, Digital Human Development
The Digital Domain team demonstrated their state-of-the-art, real-time digital human technology. They created characters from a comprehensive digital data set of the human face, giving viewers a VR look at the unreal world through a character’s perspective. “The computer needs to learn how the face works,” the team said; after it is trained, it runs “wicked fast.” Once the data is captured, the neural network can be trained in the time it takes to get from “Santa Monica to Pasadena on a Thursday evening,” and the network learns everything, even blood flow.
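Digital Domain didn’t detail their architecture on stage, but the learn-once-offline, run-fast-per-frame pattern they described can be illustrated with a toy stand-in: fit a mapping from tracked face data to rig controls, then evaluate it per frame as a single cheap operation. Everything below is synthetic and hypothetical, not their pipeline:

```python
# Toy stand-in for training a model that maps tracked face data to
# animation controls. Digital Domain's real system uses deep networks
# trained on dense facial capture; this linear least-squares version
# only illustrates the learn-then-run-fast pattern.
import numpy as np

rng = np.random.default_rng(42)
n_frames, n_landmarks, n_controls = 1000, 68 * 2, 50  # 68 2D landmarks

# Pretend capture session: landmark positions and matching
# rig-control values for each frame (synthetic data here).
X = rng.normal(size=(n_frames, n_landmarks))
true_W = rng.normal(size=(n_landmarks, n_controls))
Y = X @ true_W + 0.01 * rng.normal(size=(n_frames, n_controls))

# "Training": solve for the weights once, offline.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Inference": per-frame evaluation is one matrix multiply,
# which is why the trained model can run at real-time rates.
frame = rng.normal(size=(1, n_landmarks))
controls = frame @ W
print(controls.shape)  # (1, 50)
```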
GauGAN: Semantic Image Synthesis With Spatially Adaptive Normalization
Taesung Park (University of California, Berkeley), Chris Hebert (NVIDIA), and Gavriil Klimov (NVIDIA) presented “GauGAN,” a smart-paintbrush technology that generates realistic images in real time. In just a few brushstrokes, it turns a rough semantic layout into a photorealistic image. The “GauGAN” team recognizes that, as an artist, it’s important to be able to generate content quickly, and with this tool even an amateur artist can create something lifelike!
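The “spatially adaptive normalization” in the title is the published SPADE technique: activations are normalized, then scaled and shifted per pixel by values predicted from the semantic map, so the painted labels steer the generator at every layer. A minimal numpy sketch, with a 1x1 projection standing in for the paper’s small convnets:

```python
# Minimal numpy sketch of spatially adaptive normalization (SPADE),
# the technique in GauGAN's title. Real SPADE predicts the modulation
# with small convnets; a 1x1 projection stands in for those here.
import numpy as np

def spade(x, seg, w_gamma, w_beta, eps=1e-5):
    """x:   activations, shape (N, H, W, C)
       seg: one-hot semantic map, shape (N, H, W, L)
       w_gamma, w_beta: (L, C) projections standing in for convnets."""
    # Parameter-free normalization over batch and space, per channel.
    mu = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Per-pixel scale and shift derived from the semantic map: this
    # is what makes the normalization "spatially adaptive".
    gamma = seg @ w_gamma   # (N, H, W, C)
    beta = seg @ w_beta     # (N, H, W, C)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 8, 16))            # generator activations
labels = rng.integers(0, 5, size=(2, 8, 8))   # 5 semantic classes
seg = np.eye(5)[labels]                       # one-hot, (2, 8, 8, 5)
out = spade(x, seg, rng.normal(size=(5, 16)), rng.normal(size=(5, 16)))
print(out.shape)  # (2, 8, 8, 16)
```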
Project Nira: Instant Interactive Real-Time Access to Multi-Gigabyte Sized 3D Assets on Any Device
Arash Kaissami and Andrew Johnson (dRaster, Inc., Nira.app) presented Project Nira, technology that lets users view, navigate, and collaborate on extremely large 3D models in real time on any device, including low-powered mobile phones. Artists can collaborate remotely without sending large files back and forth: projects load within five seconds, and Nira renders at 60 frames per second (fps) in a web browser. So much better than passing screenshots around!
Level Ex: Marching All Kinds of Rays…On Mobile
Sam Glassenberg, Matthew “Thew” Yeager, and Andy Saia are part of the team at Level Ex, which makes video games for doctors. They demonstrated medical visuals of the kind usually reserved for high-end ray tracing hardware, captured on mobile devices and in VR. It all runs on a mobile phone, plays under x-ray, and is built entirely through ray tracing. At the end of the demo, the team teased a major partnership announcement from Level Ex, coming next week. Stay tuned!
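The marching itself is the classic sphere-tracing loop: step each ray forward by the signed distance to the nearest surface until it lands on one. Level Ex’s versions run in mobile GPU shaders; the toy CPU sketch below only shows the core loop for a single ray:

```python
# CPU sketch of sphere tracing, the core loop behind "marching rays."
# Level Ex's shaders run on mobile GPUs; this toy version just finds
# the hit distance of one ray against a signed-distance-field sphere.
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere's surface."""
    return np.linalg.norm(p - center) - radius

def march(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """Step along the ray by the SDF value until we hit the surface."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t       # hit: distance traveled along the ray
        t += d             # safe step: nothing is closer than d
        if t > max_dist:
            break
    return None            # miss

origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])
hit = march(origin, direction, lambda p: sphere_sdf(p, np.zeros(3), 1.0))
print(hit)  # ~2.0: the ray travels 2 units to reach the unit sphere
```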
Causing Chaos: Physics and Destruction in Unreal Engine
Jim Van Allen, Matthias Worch, and Jeff Faris (Epic Games, Inc.) presented a playable live demo of Chaos, Unreal Engine’s new high-performance physics and destruction system, which lets developers achieve cinematic-quality visuals in real time. Clustering combines the results of fractures into increasingly larger chunks, while the field system allows the user to control everything. The presenters announced that all of this will soon launch for free.
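Epic’s internals weren’t shown in this recap, but the clustering idea can be sketched: fracture pieces form a hierarchy, and a spatial field decides which clusters break back into their children while everything else keeps simulating as one big chunk. A hypothetical illustration, not Chaos’s API:

```python
# Hedged sketch of destruction clustering: fracture pieces form a
# hierarchy, and a spatial "field" decides which clusters release
# their children. Names are illustrative, not Chaos's actual API.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    position: tuple            # representative point for field queries
    children: list = field(default_factory=list)

def strain_field(pos, impact_point=(0.0, 0.0, 0.0), falloff=5.0):
    """Toy radial field: strain is high near the impact, falling off."""
    d2 = sum((a - b) ** 2 for a, b in zip(pos, impact_point))
    return max(0.0, 1.0 - d2 / (falloff ** 2))

def fracture(cluster, threshold=0.5):
    """Release a cluster's children where the field exceeds a threshold;
    otherwise the cluster keeps simulating as one rigid chunk."""
    if strain_field(cluster.position) > threshold and cluster.children:
        released = []
        for child in cluster.children:
            released.extend(fracture(child, threshold))
        return released
    return [cluster]

wall = Cluster((0.0, 0.0, 1.0), [Cluster((0.0, 0.0, 0.5)),
                                 Cluster((0.0, 3.0, 0.5))])
pieces = fracture(wall)
print(len(pieces))  # chunks active after the impact
```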
VR Hair Salon for Avatars
Who said avatars can’t get some pampering? Hao Li (Pinscreen, USC/ICT) and Liwen Hu (Pinscreen) demonstrated a VR system that creates convincing digital hair models from images. Even novices can create rich hairstyles, including buns, braids, eyebrows, and beards.
“Reality vs. Illusion” Real-Time Ray Tracing
Natalie Burke and Arisa Scott (Unity Technologies) displayed a seamless transition between a real car and a CG car, showing off the beautiful dynamic shadows, lighting, and occlusion that ray tracing brings to a work of art. The rendering was done in the Unity editor.
Spooky Action at a Distance: Real-Time VR Interaction for Non-Real-Time Remote Robotics
Pavel Savkin and Nathan Quinn (SE4 Inc) demonstrated how VR can assist traditional remote robotics. The robot can remember and repeat actions from a queue, and the system manages safety by letting the user delete an action, tossing it into a black hole within the VR interface.
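SE4’s interface wasn’t shown in code, but the record-review-execute pattern it embodies is easy to sketch: actions queue up in VR, the operator can discard a mistake, and only the reviewed queue is replayed on the distant robot. A hypothetical sketch, not SE4’s API:

```python
# Hedged sketch of the record-review-execute pattern for VR robot
# teleoperation: actions queue up, and the operator can delete a
# mistake before anything reaches the hardware. Not SE4's actual API.
from collections import deque

class ActionQueue:
    def __init__(self):
        self._queue = deque()

    def record(self, action):
        """Capture an action demonstrated in VR."""
        self._queue.append(action)

    def discard(self, index):
        """'Toss into the black hole': remove a queued action."""
        del self._queue[index]

    def execute(self, robot):
        """Replay the reviewed queue on the (high-latency) robot."""
        while self._queue:
            robot.perform(self._queue.popleft())

class FakeRobot:
    def perform(self, action):
        print("executing:", action)

q = ActionQueue()
q.record("move_arm_to(shelf)")
q.record("grip(0.8)")
q.record("grip(0.2)")   # oops: recorded by mistake
q.discard(2)            # operator deletes it in VR before execution
q.execute(FakeRobot())
```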
Real-Time Procedural VFX Characters in Unity’s Real-Time Short Film “The Heretic”
Adrian Lazar (Unity Technologies) demonstrated the procedural creation and manipulation of real-time characters in the short film “The Heretic,” built with a set of tools and shaders inside Unity. With hair a recurring theme of this year’s Real-Time Live!, Lazar touched on the character Morgan, who posed some interesting challenges: the character does not have a clearly defined physical representation and needs to integrate into the environment in a natural way.
At the end of the show, six jurors voted for the 2019 Best in Show and the audience voted live for an all-new Audience Choice award. Congratulations go to the “GauGAN” team, who received both the Best in Show and Audience Choice awards! Watch the full livestream.