Infinite Shapes, Infinite Possibilities

by | 15 October 2025 | Conferences, Research

Image credit: Chang et al., Shape Space Spectra (SIGGRAPH 2025 / ACM TOG)

“Shape Space Spectra”, a standout SIGGRAPH 2025 Technical Paper, is changing how we think about eigenanalysis in shape modeling. Instead of being limited to a single shape or discretization, this method computes eigenfunctions across entire families of shapes, unlocking new possibilities in sound synthesis, animation, and elastodynamic simulation. We wanted to get the inside scoop from the contributors — what inspired this research, the challenges they faced, and where they see it heading next.

SIGGRAPH: Share some background about “Shape Space Spectra”. What inspired your team to explore eigenanalysis across continuously parameterized shape families?

Yue Chang (YC): In 2023, we published our SIGGRAPH paper “LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields”, which introduced a method to represent basis functions for deformable simulations as neural fields. This approach allowed the basis to generalize across shapes with similar geometries.

However, we became intrigued by the question of how to extend this generalization to entirely different shapes. To achieve this, we explored conditioning the basis functions on shape parameters. Since eigenanalysis plays a central role in computing these bases, our investigation naturally evolved into studying eigenanalysis across continuously parameterized shape families — where the outputs of our neural representation are explicitly conditioned on the underlying shape parameters.

SIGGRAPH: Your method represents eigenfunctions as neural fields over shape space. What were the main challenges in developing this approach, and how did you address them?

YC: The main challenge was understanding how and why mode changes occur across the shape space. When testing our method on a simple shape family — rectangles with varying width and height — we observed that the first eigenfunction’s mode appeared to change abruptly around the square shape. Initially, this led us to suspect that it might be fundamentally impossible to represent eigenfunctions as continuously varying functions over shape space.

However, during our discussions, Eitan proposed an alternative explanation: The apparent mode changes might not reflect discontinuities in the eigenfunctions themselves, but rather eigenvalue crossings — where a higher eigenvalue mode becomes lower (or vice versa) as the shape changes. To verify this, we plotted the eigenvalues as functions of the shape parameters. This visualization made everything clear: The mode changes we observed were directly tied to eigenvalue order swaps occurring across the shape space.
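The rectangle experiment described above can be sketched with the closed-form Dirichlet Laplacian spectrum, λ(m, n) = π²(m²/w² + n²/h²), as a stand-in for the paper's numerical eigenanalysis (a toy illustration, not the authors' code). Sweeping the aspect ratio at fixed area shows the (2, 1) and (1, 2) branches swapping order exactly at the square:

```python
import numpy as np

def rect_eigenvalue(m, n, w, h):
    """Dirichlet Laplacian eigenvalue for mode (m, n) on a w-by-h rectangle."""
    return np.pi**2 * (m**2 / w**2 + n**2 / h**2)

# Sweep the width at fixed unit area (so h = 1 / w) and track two branches.
widths = np.linspace(0.5, 2.0, 151)
lam_21 = rect_eigenvalue(2, 1, widths, 1.0 / widths)
lam_12 = rect_eigenvalue(1, 2, widths, 1.0 / widths)

# The branches cross where their difference changes sign: at the square.
crossing = widths[np.argmin(np.abs(lam_21 - lam_12))]
print(f"eigenvalue crossing near w = {crossing:.2f}")
```

Plotting `lam_21` and `lam_12` against `widths` reproduces the diagnosis in the interview: each eigenvalue branch is smooth, and the apparent mode discontinuity is just the two branches trading places at w = h.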

SIGGRAPH: The paper discusses eigenvalue dominance swaps at points of multiplicity and the use of dynamic reordering during optimization. Could you explain how this works and why it’s important?

YC: As we discovered, in continuously parameterized shape spaces there isn’t always a single, consistent ordering of eigenvalues that holds across all shapes. In some regions of the space, two eigenvalues can become very close or even swap their order — a phenomenon known as eigenvalue crossings or dominance swaps.

This creates a problem for the traditional variational formulation of eigenanalysis, which assumes a fixed ordering of eigenvalues (for example, “the first eigenfunction corresponds to the smallest eigenvalue,” and so on). When eigenvalues switch order, that assumption breaks down — leading to discontinuities in the eigenfunctions as the system “jumps” from tracking one mode to another.

To address this, we introduced dynamic reordering during optimization. Instead of fixing the order of eigenfunctions, we continuously monitor and reorder them based on their current eigenvalues. In other words, if an eigenfunction’s eigenvalue becomes smaller, it moves up in the ordering, and vice versa. This flexible sorting removes the rigid constraint of a global ordering, allowing the learned eigenfunctions to remain continuous and smooth across the entire shape space.
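The reordering step itself is simple; a minimal sketch, assuming the learned modes are held as rows of an array alongside their current eigenvalue estimates (the names and layout here are hypothetical, not the paper's implementation):

```python
import numpy as np

def reorder_modes(modes, eigenvalues):
    """Sort modes by their *current* eigenvalues rather than a fixed index.

    modes:       (k, n) array, one row per learned eigenfunction (toy layout).
    eigenvalues: (k,) array of current eigenvalue estimates.
    """
    order = np.argsort(eigenvalues)
    return modes[order], eigenvalues[order]

# Toy example: at this shape, the two modes have swapped dominance.
modes = np.array([[1.0, 0.0], [0.0, 1.0]])
vals = np.array([2.5, 1.0])
modes_sorted, vals_sorted = reorder_modes(modes, vals)
print(vals_sorted)  # ascending order after the swap
```

Applying this sort at each optimization step means the variational objective always penalizes the currently-smallest eigenvalue first, so no branch is forced to jump discontinuously when an ordering swap occurs.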

SIGGRAPH: What were some of the most exciting or unexpected findings you discovered while evaluating your method on applications like sound synthesis, locomotion, or elastodynamic simulation?

YC: The most exciting outcome was the ability of our method to generalize basis functions across hundreds of shapes from different categories for reduced-space elastodynamic simulation — which was also the original goal of this project. It was fascinating to see how the learned representation could capture dynamic behaviors across such a wide variety of shapes that look completely different.

SIGGRAPH: How do you envision “Shape Space Spectra” being applied in practical or creative workflows in graphics, simulation, or design?

YC: One of the most exciting aspects of “Shape Space Spectra” is that the basis functions are differentiable with respect to the shape parameters. This opens up a range of possibilities for practical and creative workflows in graphics and simulation. For example, it allows gradients to flow directly through the eigenanalysis process, enabling optimization and co-design of both shape and dynamics.

In simulation, this means users could fine-tune a shape to achieve desired vibration or deformation behaviors in a physically consistent way. In design and animation, artists could interactively explore how subtle geometric changes affect motion, sound, or material response — without having to recompute everything from scratch.
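What shape-gradient-based fine-tuning could look like can be sketched with the same closed-form rectangle spectrum as a stand-in for the paper's learned, differentiable basis (a toy illustration under that assumption, not the authors' method): gradient descent on a width parameter to hit a target first eigenvalue, i.e. a target vibration frequency.

```python
import numpy as np

def lam(w, m=1, n=1, h=1.0):
    """Closed-form Dirichlet eigenvalue on a w-by-h rectangle (toy stand-in
    for a learned, shape-conditioned spectrum)."""
    return np.pi**2 * (m**2 / w**2 + n**2 / h**2)

def dlam_dw(w, m=1):
    """Analytic derivative of the eigenvalue with respect to the width."""
    return -2.0 * np.pi**2 * m**2 / w**3

# Descend on the loss (lam(w) - target)^2 to tune the shape parameter.
target, w, lr = 15.0, 1.0, 1e-3
for _ in range(2000):
    w -= lr * 2.0 * (lam(w) - target) * dlam_dw(w)
print(f"optimized width w = {w:.3f}, eigenvalue = {lam(w):.3f}")
```

In the paper's setting the derivative would come from differentiating the neural representation rather than a closed form, but the workflow is the same: the eigenvalue is a differentiable function of the shape parameters, so standard gradient-based optimizers can co-design shape and dynamics.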

We envision this differentiable formulation as a bridge between geometry, physics, and design, allowing continuous, gradient-based control over complex dynamic phenomena across diverse shape spaces.

SIGGRAPH: What advice would you give to researchers who are considering submitting to SIGGRAPH Technical Papers in the future?

YC: I think a fun way to view our project is that traditional eigenanalysis represents just a single point within a larger “shape space.” From this perspective, our work adds new axes to that traditional framework — essentially expanding the space in which we can explore. By generalizing along these new dimensions, we discovered some surprising and enjoyable results.

One takeaway from this experience is that it’s often valuable to think about how a traditional method might be extended by introducing new variables or perspectives. Expanding the method along an additional axis can lead to entirely new formulations and unexpected insights.

SIGGRAPH 2026 is coming to Los Angeles! Join us 19–23 July at the Los Angeles Convention Center, and be sure to bookmark the event website so you don’t miss any updates.


Yue Chang is a third-year PhD student in Computer Science at the University of Toronto, advised by Eitan Grinspun. Her research focuses on computer graphics, with an emphasis on exploring shape spaces and their underlying structures. She develops methods to capture fundamental properties — such as eigenfunctions and discontinuities — across shape families, enabling new applications in simulation and design.
