Image Credit: Sunday Morning by NCSU MADTech student Laura Reeve, 2024

At North Carolina State University’s MADTech program, students are redefining digital performance by building real-time animation pipelines centered on motion capture as a performing arts practice. In the project, students brought 3D self-portraits to life as expressive digital doubles, emphasizing improvisation, embodiment, and cinematic storytelling over traditional animation workflows. We sat down with Topher Maraffi to discuss his SIGGRAPH 2024 Educator’s Forum project, “Metahuman Theatre: Teaching Photogrammetry and Mocap as a Performing Arts Process.”

SIGGRAPH: How does using motion capture as a performing arts process change the way students engage with animation and storytelling?

Topher Maraffi (TM): When students act out their scene idea, they are confronted by physical realities, such as how their bodies move in space, which quickly gives them a unique characterization based on spatial performance data. Since they can’t script everything, this process affords moments of embodied improvisation they didn’t plan for in pre-production, leading to acting discoveries that add personality to the character animation through physical tics and unique blocking choices that enhance the narrative. They can then iteratively refine that mocap base by additively setting keyframes to go beyond reality into hyper-real or impossible character movements, but starting with tangible data from a live performance helps students learn believable actions and timing.

SIGGRAPH: What were the key challenges or surprises students encountered while creating and performing their own MetaHuman digital doubles?

TM: Some of the students were intimidated by the prospect of performing but were surprised to find out that it is a lot of fun to act out their concepts. A few discovered that they have a talent for physical acting.

From a technical standpoint, students are surprised to find that the real-time animation process only begins with the motion capture session, and challenged by how much time-consuming labor goes into fixing mocap data from a poor capture. They learn that the technical animation craft of sensor placement and calibration directly impacts the speed and quality of the character animation process, and that taking the time to do every step of the capture session carefully saves time in post-production editing.

SIGGRAPH: Can you share more about how you adapted traditional animation principles, like Disney’s 12 Principles, to performance capture workflows? What impact did that have on the final animations?

TM: Disney’s 12 Principles are an industry standard for animators and have been proven to create expressive characters. But Disney artists were studying physical performers like Charlie Chaplin when they developed their principles in the early 20th century, so it made aesthetic sense to spiral them back to performance capture, especially when the end result is character animation.

The main point I teach students is that they are performing their characters through layers of technology, like masked theatre or puppetry, and that Disney principles like exaggeration and anticipation are required to project their expressiveness through to the final character performance. What feels like unnatural movement to a student tends to look more natural on an animated character and can push the characterization through the uncanny valley to an expressive performance.

Much of what I do is try to get students to go beyond their everyday movements into what performing arts practitioner Eugenio Barba called “extra-daily” technique, which is consistent with how expressive mocap actors like Andy Serkis move their bodies.

SIGGRAPH: How do you see this approach to teaching real-time animation and performance capture evolving in the future? What possibilities does it open for students as creators?

TM: We are starting to explore generative AI tools with real-time performance capture in a virtual production volume, using suit-less AI software like Radical Motion with a Live Link connection into Unreal Engine. We recently used this real-time mocap pipeline in our Virtual Production Lab to puppeteer a talking octopus character in a collaborative shoot with the professional VFX studio Alecardo. These subscription-based AI tools are much more accessible for students because they cost far less than suits and work with any body type using a common webcam.

One of the bottlenecks of teaching motion capture in the classroom is the time it takes for students to suit up and troubleshoot the fit of the sensors for different body sizes. While the quality is not yet as good as that of inertial or optical suits, being able to quickly rough out a scene allows more creative exploration of the performance space. Then, if we need higher quality, we can do final captures with our Noitom or Rokoko inertial suits and MetaHuman Animator. This will allow us to explore more performing arts techniques, like devised theatre improvisation, for developing original narratives with motion capture early in the production pipeline.
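For readers curious what a real-time mocap stream like the one Maraffi describes looks like at the data level, here is a minimal conceptual sketch in Python. It is not the Radical or Live Link API; the port, packet format, and joint names are hypothetical stand-ins for a webcam-based pose stream being lightly smoothed before it is retargeted onto a character.

```python
import json
import socket

# Conceptual sketch only: a hypothetical UDP listener standing in for a
# webcam-based AI mocap stream. POSE_PORT and the "joints" packet layout
# are illustrative, not the Radical or Live Link wire format.
POSE_PORT = 54321
SMOOTHING = 0.8  # exponential smoothing factor to tame per-frame jitter

def stream_poses():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", POSE_PORT))
    smoothed = {}
    while True:
        packet, _ = sock.recvfrom(65535)
        frame = json.loads(packet)  # e.g. {"joints": {"hips": [x, y, z], ...}}
        for joint, sample in frame["joints"].items():
            prev = smoothed.get(joint, sample)
            # Blend toward the new sample; higher SMOOTHING = steadier but laggier motion.
            smoothed[joint] = [
                SMOOTHING * p + (1.0 - SMOOTHING) * s
                for p, s in zip(prev, sample)
            ]
        yield smoothed  # latest smoothed pose, ready for downstream retargeting

if __name__ == "__main__":
    for pose in stream_poses():
        print(pose["hips"])
```

The smoothing step illustrates the trade-off Maraffi notes above: suit-less AI capture is fast and accessible, but its raw output is noisier than inertial or optical data, so some cleanup happens before the performance reaches the character.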

As we continue exploring new frontiers in digital performance, projects like this showcase how emerging tools can transform both the classroom and the stage. Don’t miss what’s next — register now for SIGGRAPH 2025 and experience the future of animation, performance, and immersive media.


Topher Maraffi is an Assistant Professor of Media Arts, Design and Technology at North Carolina State University and the lead MADTech research faculty member in the College of Design’s Virtual Production Lab. He has taught animation and game design for almost 30 years and attended his first SIGGRAPH in New Orleans in 1996. At SIGGRAPH 2025, he is the room lead for the Immersive Pavilion.
