Image credit: Top Left: Zeshun Zong, Xuan Li, Minchen Li, Maurizio M. Chiaramonte, Wojciech Matusik, Eitan Grinspun, Kevin Carlberg, Chenfanfu Jiang, and Peter Yichen Chen. 2023. Neural Stress Fields for Reduced-order Elastoplasticity and Fracture. ACM Transactions on Graphics (2023). Top Middle: Otman Benchekroun, Jiayi Eris Zhang, Siddhartha Chaudhuri, Alec Jacobson, Eitan Grinspun, and Yi Zhou. 2023. Fast Complementary Dynamics via Skinning Eigenmodes. ACM Transactions on Graphics (2023). Right: Vismay Modi, Nicholas Sharp, Or Perel, Shinjiro Sueda, and David I.W. Levin. 2024. Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation. ACM Transactions on Graphics (2024). Bottom Left: Ty Trusty, Otman Benchekroun, Eitan Grinspun, Danny M. Kaufman, and David I.W. Levin. 2023. Subspace Mixed Finite Elements for Real-Time Heterogeneous Elastodynamics. In SIGGRAPH Asia 2023 Conference Papers. Bottom Middle: Yue Chang, Peter Yichen Chen, Zhecheng Wang, Maurizio M. Chiaramonte, Kevin Carlberg, and Eitan Grinspun. 2023. LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields. SIGGRAPH Asia 2023 Conference Papers (2023).
At SIGGRAPH 2025, innovation and tradition in computer graphics and interactive techniques collided. In the Technical Workshop “Reduced-Order Modeling for Physical Simulation: From the Classical to the Neural”, researcher David Levin explored the evolution from classical methods to neural approaches, the challenges of high-dimensional simulations, and their impact across graphics, VFX, AR/VR, and interactive applications.
SIGGRAPH: Tell us about your work in reduced-order modeling for physical simulation. What motivated you to examine the evolution from classical engineering formulations to neural methods?
David Levin (DL): Reduced-order modeling has been a powerful tool in graphics and engineering for a long time. It has its genesis in the mid-’60s, but it always seems to pop up again as the size, complexity, and scale of physics problems outpace available hardware. My particular interest in reduced-order modeling is motivated by (1) trying to make it easy and fast to apply to a wider range of complicated geometries and (2) extending it to work with more involved nonlinear physical systems. For instance, in one previous paper, our group simulated an entire mammoth, skeleton and all, in real time.
Because modal analysis has existed for so long (I often feel my grad students view anything published before 2000 as prehistory), it’s easy to lose track of what’s been done before and how successful those old approaches can be on modern problems with modern hardware. Especially now, when computer science is moving so fast, it’s important to look backward so that when we advance, we are pursuing genuinely new theoretical and technical developments rather than just repeating ourselves.
SIGGRAPH: Physical systems can be incredibly complex. What are the main challenges when simplifying high-dimensional simulations into compact and efficient reduced models, and how does your approach address them?
DL: It’s hard to really call anything “my approach” in such a well-explored field, and I think the groundwork for all this was laid long before I got here. In general, the main challenge with any model-reduction approach is deciding what you will give up. By definition, the model is reduced by actively removing its ability to produce “uncommon” motions or outputs. How you define “uncommon,” whether that’s from analysis or learned from data, is the first important design decision.
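To make that design decision concrete, here is a minimal, hypothetical sketch of the data-driven route, assuming displacement snapshots have been saved from a few full simulations; the sizes and array names are illustrative rather than taken from any particular paper.

```python
import numpy as np

# Illustrative snapshot matrix: each column is one displacement state (3n degrees
# of freedom) saved from a full, unreduced simulation. Random data stands in here.
n_dofs, n_snaps, r = 3 * 1000, 200, 20
U_snap = np.random.rand(n_dofs, n_snaps)

# The SVD orders directions by how much of the snapshot motion they explain;
# keeping the first r left singular vectors defines the "common" motions.
Phi, sigma, _ = np.linalg.svd(U_snap, full_matrices=False)
B = Phi[:, :r]

# Any full-space state u is now represented by r coordinates q. Whatever lies
# outside span(B), the "uncommon" motions, is exactly what the model gives up.
u = U_snap[:, 0]
q = B.T @ u        # reduce
u_approx = B @ q   # reconstruct: u is approximately B @ q
```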
Secondly, under the hood of all numerical simulation codes, there are a ton of operations that accumulate information over the whole geometry of the simulated object. Restricting those operations to an optimal subset of whatever you are simulating is really the hidden key to making the whole method fast, and there are many approaches to finding those subsets algorithmically.
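As a toy illustration of that hidden key, the sketch below contrasts assembling a reduced internal force over every element with assembling it over a small, precomputed subset with weights, in the spirit of cubature-style methods; the element_force callback, the subset, and the weights are all hypothetical placeholders.

```python
import numpy as np

def reduced_force_full(B, elements, q, element_force):
    """Reduced internal force assembled over every element (the expensive version)."""
    f = np.zeros(B.shape[0])
    for e in elements:
        f += element_force(e, B @ q)      # accumulates over the whole geometry
    return B.T @ f

def reduced_force_sampled(B, sampled_elements, weights, q, element_force):
    """The same quantity approximated from a small weighted subset of elements.

    The subset and its nonnegative weights would be chosen offline (for example,
    by fitting to training forces) so the cost no longer scales with the geometry.
    """
    f = np.zeros(B.shape[0])
    for e, w in zip(sampled_elements, weights):
        f += w * element_force(e, B @ q)
    return B.T @ f
```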
In my research group, we really care about objects that are heterogeneous, meaning they are made up of many different materials. Think of a chair with a soft cushion but very stiff and strong legs and arms. So we focus on constructing reduced models that account not just for the geometry but also for the materials the object is built from.
SIGGRAPH: Your workshop explores both traditional techniques and emerging neural model reduction strategies. How do these methodologies complement each other, and why is it important for the field to bridge the two?
DL: There’s definitely a flavor of “everything old is new again,” which I enjoy. The basic reduced-order algorithm in graphics for simulating complex deforming objects hasn’t really changed since Barbič and James’ landmark work Real-Time Subspace Integration for St. Venant–Kirchhoff Deformable Models. What’s been interesting is seeing the infusion of machine learning and neural methods into these basic frameworks and seeing how they can enrich the more classical approaches.
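For readers who have not met that framework, the core recipe, stated here from general knowledge of subspace simulation rather than from the workshop itself, is to substitute a low-dimensional ansatz into the full equations of motion and project them onto the basis:

```latex
% Full-space semi-discrete equations of motion and their subspace projection.
% M is the mass matrix, f_int the internal elastic force, f_ext the external force,
% B a (3n x r) reduced basis, and q the r reduced coordinates, with r much smaller than 3n.
\[
  M \ddot{u} + f_{\mathrm{int}}(u) = f_{\mathrm{ext}},
  \qquad u \approx B q,
\]
\[
  \underbrace{B^{\top} M B}_{r \times r}\, \ddot{q}
  + B^{\top} f_{\mathrm{int}}(B q)
  = B^{\top} f_{\mathrm{ext}}.
\]
```

The small r-by-r system is what gets integrated in real time; evaluating the projected internal force efficiently is where ideas like the subset sampling mentioned above come in.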
For instance, neural methods have relaxed the requirements of reduced-order modeling so that any geometry you can render, you can now build a ROM out of. Work out of UofT (OK, I work there, so I’m biased) shows how you can train a neural network to build a single ROM for multiple shapes or for objects that are being cut (Chang et al., LiCROM…), and other work shows how neural networks let you efficiently reduce additional fields, like stress, in existing simulation algorithms such as the Material Point Method (Zong et al., Neural Stress Fields…). And this is all just scratching the surface of relaxing the limitations and extending the capabilities of older ROM approaches.
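To give a flavor of what “any geometry you can render” means in practice, here is a hedged sketch of the general neural-field ROM idea, a paraphrase rather than any paper’s actual code: a small network maps a reference-space point plus a low-dimensional latent vector to a displacement, so the same reduced state can be queried on any sampling of the geometry. The architecture and sizes here are assumptions.

```python
import torch
import torch.nn as nn

class NeuralFieldROM(nn.Module):
    """Illustrative neural-field reduced model: (point x, latent q) -> displacement u(x)."""
    def __init__(self, latent_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, q):
        # x: (N, 3) reference positions; q: (latent_dim,) reduced state shared by all points.
        q_tiled = q.expand(x.shape[0], -1)
        return self.net(torch.cat([x, q_tiled], dim=-1))

# The same reduced state can be evaluated on any point sampling of the geometry:
model = NeuralFieldROM()
points = torch.rand(1024, 3)      # e.g., mesh vertices, surface samples, or voxels
q = torch.zeros(16)               # reduced coordinates evolved by a time integrator
displacements = model(points, q)  # (1024, 3)
```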
SIGGRAPH: Did the workshop reveal any unexpected findings or novel perspectives — either in accuracy, scalability, or generalization — when applying neural approaches to simulation problems?
DL: One wonderful thing about workshops is that they let you reach a potentially broader audience than just the standard Technical Papers program and explore some deeper connections. For instance, we had some attendees from industry, and their interest in ROMs is often driven not just by performance considerations but by memory considerations — they want to run physics simulations on phones, VR headsets, smart glasses, and all kinds of very constrained hardware. Here, neural approaches can be extremely useful because they act as a kind of compression for the ROM itself. I’d never really thought about those applications before, so for me, this was eye-opening.
SIGGRAPH: Reduced-order modeling has implications across graphics, VFX, AR/VR, and interactive applications. How do you envision these techniques shaping real-time simulation or production workflows in the coming years?
DL: I’d be surprised if ROMs aren’t already doing that, but I do think new advances will open more applications in fields like robotics (where being able to run as many simulations in parallel as possible is of paramount importance), in the VR/AR space, and even outside visual effects entirely. For instance, ROMs are a key component in sound simulation, which is incredibly important for immersive virtual worlds. As we push the scale and efficiency of these methods, I imagine all these downstream applications will benefit.
SIGGRAPH: What guidance would you give to computer graphics professionals who are interested in submitting a SIGGRAPH Technical Workshop or attending one at SIGGRAPH 2026?
DL: You should do it! Workshops are a great way to explore content in a way that doesn’t fit the standard technical papers mold. In our case, my co-organizers Peter Chen (UBC) and Eitan Grinspun (UofT) and I wanted both to share curated, cutting-edge machine learning work and to connect that work to earlier algorithms and more standard mesh-based methods. The Technical Papers program focuses on the cutting edge, and the sheer volume of new work can make it hard to absorb. Workshops provide a place to build important connections and help focus the flow of information a bit, rather than drinking directly from the firehose.
It’s also a great way to bridge communities, invite great keynote speakers (we were lucky to have Maurizio Chiaramonte from Meta, Jernej Barbič from USC, and Mengyu Chu from Peking University), have panel discussions, and really create the sort of research conversation that you feel will best benefit your community. And the program is so flexible that I’m hoping to see more workshops on tech policy or social issues in the future. So yes, do it — submit to the workshop program or just go to workshops. That’s my advice.
Feeling inspired? Submit your SIGGRAPH 2026 Technical Workshop and share your ideas with the forefront of computer graphics and interactive techniques.

David Levin is an Associate Professor in the Department of Computer Science at the University of Toronto and a Principal Research Scientist at NVIDIA, where he works on algorithms for physics-based animation, mechanical design, and robotics. He is a former Canada Research Chair, a winner of the CHCCS Early Career Research Award and an Ontario Early Career Researcher Award, and has published over 40 papers at ACM SIGGRAPH and SIGGRAPH Asia, the top venues for computer graphics research.



