Image Credit: photo by Andreas Psaltis © 2023 ACM SIGGRAPH
On Tuesday, 8 August, SIGGRAPH 2023 Real-Time Live! Chair Cem Yuksel moderated an extra-special presentation of Real-Time Live! for the 50th conference celebration. This year, Real-Time Live! was hosted in person in Los Angeles, and for the first time ever, it was livestreamed on Twitch. SIGGRAPH 2023 showcased 13 live demos, and each one stole the show with its transformative technology. Before you read a recap of each real-time demo, check out the Real-Time Live! trailer for a glimpse at the projects presented at the 50th conference celebration, and catch the livestream playback on Virtual Access.
ChatAvatar: Creating Hyper-realistic Physically Based 3D Facial Assets Through AI-driven Conversations
This demo showcases game-changing technology that offers new horizons for immersive virtual experiences by enabling the creation of hyper-realistic 3D facial assets with PBR textures through AI-driven conversations. Viewers witnessed the avatar transform into famous figures, including Albert Einstein, J. Robert Oppenheimer, and Mark Zuckerberg.
Feature Film Quality Characters Anywhere
Take a magic carpet ride with Ozone Story Tech as they present an innovative software technology that enables the authoring and real-time performance of 3D deformations, poses, and animations. The presenters were able to change the resolution of the model on the fly — producing perfectly smooth results.
Interactive AI Material Generation and Editing in NVIDIA Omniverse
In need of a home makeover? Step inside the NVIDIA Omniverse as the creators use an end-to-end tool with optimized AI models tightly integrated into artists’ interactive loop for PBR material generation (from text or image), iterative refinement (e.g., adjust style), and editing (e.g., local inpainting). Plus, this project took home the coveted Best in Show Award.
Intermediated Reality With an AI 3D Printed Character
AI has gone rogue! This presentation showcased a live AR interaction with an AI character, using a photorealistic intermediated reality technique. By processing live speech recognition, the AI generated a short in-character response while animating the character’s face in sync with the generated audio.
Metaphysic Live: Real-time Hyperreal Faceswap
The audience at Real-Time Live! got to witness a live faceswap from Metaphysic. This demo empowers creators to synthesize photorealistic faces and integrate them into the input camera feed with minimal latency, making it perfect for live, on-stage performances.
Napkinmatic AppMode 3D Mesh and Texture Designer
Audience members had their minds blown when two creators presented their spatial computing 3D authoring tool on a napkin. This demo showcases a robust spatial computing platform connecting the real world to AI in a model- and endpoint-agnostic way. With concept-ideation and AI-texturing buttons, a napkin sketch transforms into a 3D model: the sky’s the limit!
Real-time Collision Using AI
This demo made a “splash” at Real-Time Live! This new method for collision detection of general non-convex shapes uses AI to decompose the shapes into parts tightly approximated by quadric inequalities, then solves the collisions (computing the deepest point, contact normal, and intersection polygon).
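To give a flavor of this kind of quadric-based contact query, here is a minimal toy sketch using the simplest quadric of all, a sphere, written as the inequality |x − c|² − r² ≤ 0. This is an illustration only, not the presenters’ method, and the helper name `sphere_collision` is our own invention; for two spheres the deepest point and normal have a closed form.

```python
import math

def sphere_collision(c1, r1, c2, r2):
    """Return (colliding, contact_normal, deepest_point) for two spheres,
    each expressed as the quadric inequality |x - c|^2 - r^2 <= 0."""
    d = [b - a for a, b in zip(c1, c2)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist >= r1 + r2:
        return False, None, None
    # Contact normal points from sphere 1 toward sphere 2.
    n = [x / dist for x in d] if dist > 0 else [0.0, 0.0, 1.0]
    # Deepest point of sphere 2 inside sphere 1: the point on sphere 2's
    # surface lying opposite the normal direction.
    p = [c - r2 * ni for c, ni in zip(c2, n)]
    return True, n, p

colliding, n, p = sphere_collision((0, 0, 0), 1.0, (1.5, 0, 0), 1.0)
# colliding is True; n == [1.0, 0.0, 0.0]; p == [0.5, 0.0, 0.0]
```

Real quadric parts are general second-order surfaces, so the demo must solve these queries numerically, but the sphere case shows the shape of the output: a yes/no answer plus the contact geometry a physics solver needs.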
Real-time Stage Modelling and Visual Effects for Live Performances
The crowd went wild for this interactive demo where the creators showcased real-time 3D modeling, rendering, blending of assets, and interactivity between real and virtual performers. They demonstrated their platform’s capabilities with a mixed reality performance featuring virtual and real actors engaging with in-person audiences — including launching Stanford Bunnies into the crowd. This project earned the fan-favorite Audience Choice Award.
Roblox Generative AI in Action
Open the door to a world of possibilities with this exciting demo. By leveraging natural language and other expressions of intent, the creators can build interactive objects and scenes without complex modeling or coding. They aimed to make creation faster and easier by using AI image generation services and LLMs.
Sonification of a Juggling Performance Using Spatial Audio
The CG circus has entered the stage! The creators juggled live to showcase their new musical instrument based on juggling patterns captured in real time. The movement and sound create a feedback loop and a synesthetic experience — but be careful not to drop a ball!
Suit Up: AI MoCap
Let’s get physical at Real-Time Live! This demo is a real-time, marker-based motion capture method built upon fast data-driven models, offering a cost-effective, flexible, and portable solution. The creators showcased its capabilities live by doing squats, pushups, and more.
Super Fast Strand-based Hair Rendering With Hair Meshes
Let your hair flow through the LA breeze with this novel demo. This strand-based rendering technique delivers an unprecedented level of rendering performance. And what better way to top off the presentation than by putting full-resolution hair on the Utah Teapot!
Tell Me, Inge… An Interactive Interview With a Holocaust Survivor
Hear firsthand stories from a Holocaust survivor using video AI and streaming WebXR VR content. The audience was immersed in Inge Auerbacher’s memories as a child survivor of the Theresienstadt ghetto.
Congratulations to the Best in Show and Audience Choice award winners! If you missed the most anticipated event of the year or want to rewatch this spectacular showcase, Full Conference and Virtual Access participants can watch Real-Time Live! again on the SIGGRAPH 2023 website. Want in on the fun? You can still register to access SIGGRAPH 2023 virtually through 9 September.