Mastering Motion Magic in VR

30 May 2024 | Conferences, Virtual Reality

Image credit: David Ledo, Autodesk Research

SIGGRAPH sat down with the creators behind the SIGGRAPH 2024 Immersive Pavilion selection, “Reframe: Recording and Editing Character Motion in Virtual Reality,” to learn more about their inspiration behind this immersive experience, the tools used to create their project, and what they hope participants take away from the experience this July in Denver.

SIGGRAPH: Share an overview of “Reframe: Recording and Editing Character Motion in Virtual Reality.” What inspired its development?

Fraser Anderson (FA): In recent years, we have seen a number of tools emerge that enable instant creation of content, especially video. These tools were designed to be accessible to a broad audience and to make creating playful experiences easy.

Our team wanted to bring that level of excitement from these emerging tools into the world of animation. This meant moving away from requiring people to be on set, working with specialized equipment and professional-grade 3D animation tools. Virtual reality (VR) and the latest tracking technology built into headsets gave us a way to directly manipulate content and enable recording. From there, the question was how to enable the editing of character motion. Here, we drew inspiration from traditional 2D animation, where animators would draw a set of poses and lay them out in time to communicate how a character might move. We adapted this by situating a timeline in space and superimposing a set of computed “key poses” at the points where the movement is most pronounced.

Now, with VR, we can physically manipulate these poses, and the changes are reflected across all the in-betweens. To show how the body changes over time in a simple way, we added a line tracing the trajectory of each joint as a function of time. All of this enables a process very akin to puppetry: it feels intuitive and playful, and it lets creators both author a motion and do a quick pass to edit and fine-tune it.
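The interview does not spell out how those key poses are selected. As a rough illustration only (not necessarily how “Reframe” computes them), one simple heuristic is to pick frames where overall joint motion peaks; the Python sketch below uses hypothetical names and a velocity-peak assumption:

```python
import numpy as np

def select_key_poses(joint_positions, min_gap=10):
    """Pick frames where overall joint movement peaks (illustrative heuristic).

    joint_positions: array of shape (num_frames, num_joints, 3)
    min_gap: minimum number of frames between selected key poses
    Returns sorted frame indices, always including the first and last frame.
    """
    # Per-frame "motion energy": summed joint displacement between frames.
    deltas = np.linalg.norm(np.diff(joint_positions, axis=0), axis=2).sum(axis=1)

    # Local maxima of motion energy are candidate key poses.
    candidates = [
        i for i in range(1, len(deltas) - 1)
        if deltas[i] >= deltas[i - 1] and deltas[i] >= deltas[i + 1]
    ]
    candidates.sort(key=lambda i: deltas[i], reverse=True)

    selected = [0, joint_positions.shape[0] - 1]  # keep the endpoints
    for i in candidates:
        if all(abs(i - s) >= min_gap for s in selected):
            selected.append(i)
    return sorted(selected)
```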

Our prototype, “Reframe,” was developed to empower creators to produce rich 3D animations in a more natural, body-focused way, using a VR headset that captures their body motion data and maps it to an avatar. With “Reframe,” a creator can streamline the production process: they record their own body movements and facial expressions in real time, then edit that motion directly in virtual reality. The animations they create can be exported and published as a final product, or further edited and integrated into a variety of workflows for video games or movies.

SIGGRAPH: Tell us more about the novel and intuitive VR animation authoring interface used in “Reframe.” How did you develop it? What sets it apart from other interfaces?

FA: “Reframe” was developed by our research team through many rounds of ideation, iterative prototyping, and user testing. From the early concept sketches, our goal was to remove the overhead normally found in 3D animation tools and develop a streamlined yet powerful way to author animation that would support intuitive, natural interaction. With these goals, “Reframe” is not trying to replicate all the features and functionality found in complex tools, but rather to provide an accessible and efficient tool for novices and experts to animate characters.

“Reframe” is built around the concepts of the “TimeTunnel,” “Trajectories,” and “KeyPoses.” The TimeTunnel spatially embeds the temporal dimension of the animation in 3D space in front of the creator using Trajectories, which are 3D paths outlining joint movement over time. It enables creators to see the motion unfolding right in front of their eyes and provides intuitive ways to directly view and manipulate that motion.
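As a rough sketch of how a Trajectory might be laid out in space (hypothetical code, not “Reframe’s” implementation), one option is to offset each recorded joint sample along a tunnel axis in proportion to its timestamp:

```python
import numpy as np

def build_trajectory_polyline(joint_track, duration, tunnel_origin, tunnel_dir, tunnel_length):
    """Lay a joint's recorded path out in space so time runs along one axis.

    joint_track: (num_frames, 3) positions of a single joint, in avatar space
    duration: clip length in seconds
    tunnel_origin / tunnel_dir: where the TimeTunnel starts and which way it extends
    tunnel_length: how much space (in metres) the whole clip occupies
    Returns a (num_frames, 3) polyline to render as the joint's Trajectory.
    """
    num_frames = joint_track.shape[0]
    times = np.linspace(0.0, duration, num_frames)

    # Offset each sample along the tunnel axis in proportion to its timestamp,
    # so later frames appear farther "down the tunnel" from the creator.
    offsets = (times / duration)[:, None] * tunnel_length * np.asarray(tunnel_dir)
    return np.asarray(tunnel_origin) + joint_track + offsets
```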

KeyPoses are automatically computed from the continuous stream of data recorded by the headset, and they provide the user with visual landmarks for the motion. Creators can directly manipulate the animation by selecting a KeyPose and reaching out and modifying the pose of the avatar — at that point using only their hands. “Reframe” uses inverse kinematics to automatically adjust the whole avatar based on how a creator moves each joint, and changes to one KeyPose are propagated to neighboring frames to allow for seamless editing. Unlike other interfaces, KeyPoses enable creators to focus on their intent rather than the complexities of the avatar’s motion, eliminating the need to manually create keyframes and providing creators with a simple way to navigate through an entire motion sequence.
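A minimal sketch of how an edit to one KeyPose might be blended into neighboring frames, assuming a simple smoothstep falloff (the falloff scheme and names here are illustrative assumptions, not “Reframe’s” actual method):

```python
import numpy as np

def propagate_keypose_edit(poses, key_frame, edited_pose, falloff=15):
    """Blend an edited KeyPose into the surrounding frames.

    poses: (num_frames, num_joints, 3) original animation
    key_frame: index of the edited KeyPose
    edited_pose: (num_joints, 3) new pose produced by the user's manipulation
    falloff: number of frames on each side over which the edit fades out
    Returns a new animation array with the edit smoothly propagated.
    """
    result = poses.copy()
    delta = edited_pose - poses[key_frame]
    start = max(0, key_frame - falloff)
    end = min(poses.shape[0] - 1, key_frame + falloff)
    for f in range(start, end + 1):
        # Weight is 1 at the KeyPose and eases to 0 at the edge of the window.
        t = 1.0 - abs(f - key_frame) / float(max(falloff, 1))
        weight = t * t * (3.0 - 2.0 * t)  # smoothstep falloff
        result[f] = poses[f] + weight * delta
    return result
```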

With “Reframe,” creators can also modify the dynamics of a motion by grabbing adjacent KeyPoses and dragging them closer together to “compress time” (speeding up that segment of the trajectory) or pulling them farther apart to “expand time” (slowing that segment down).
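Conceptually, this is a retiming of the frames between two adjacent KeyPoses. A minimal sketch of that idea (hypothetical names; “Reframe’s” actual implementation is not described):

```python
import numpy as np

def retime_segment(frame_times, key_a, key_b, new_duration):
    """Rescale the timestamps between two adjacent KeyPoses.

    frame_times: (num_frames,) monotonically increasing timestamps
    key_a, key_b: frame indices of the dragged KeyPoses (key_a < key_b)
    new_duration: segment length after dragging (shorter = faster playback)
    Returns updated timestamps; frames after key_b shift to stay contiguous.
    """
    times = frame_times.copy()
    old_duration = times[key_b] - times[key_a]
    scale = new_duration / old_duration

    # Compress (scale < 1) or expand (scale > 1) the segment between the poses.
    times[key_a:key_b + 1] = times[key_a] + (times[key_a:key_b + 1] - times[key_a]) * scale
    # Shift everything after the segment so the clip stays continuous.
    times[key_b + 1:] += new_duration - old_duration
    return times
```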

SIGGRAPH: How does the system utilize advanced VR tracking to capture motion, expressions, and gestures?

FA: “Reframe” currently uses the tracking built into the Meta Quest headsets to capture the creator’s body motion, facial expressions, and hand gestures. The Meta Quest 3 has optical cameras built into the headset that capture a user’s position and movement, and it uses machine learning models to infer the location of their limbs in 3D space. The Meta Quest Pro also has cameras inside the headset, so it can capture a user’s facial and eye movements. “Reframe” uses both data streams to animate and manipulate a creator’s avatar.
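As an illustration only, the two streams could be combined into per-frame records keyed by timestamp. The data layout below is a hypothetical sketch, not “Reframe’s” published format or any Meta SDK API:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical per-frame record; Reframe's actual data layout is not published.
@dataclass
class CaptureFrame:
    timestamp: float
    joint_positions: Dict[str, Tuple[float, float, float]]        # body tracking stream
    face_weights: Dict[str, float] = field(default_factory=dict)  # face/eye stream

def merge_streams(body_frames: List[CaptureFrame],
                  face_samples: List[Tuple[float, Dict[str, float]]]) -> List[CaptureFrame]:
    """Attach the most recent face sample to each body frame by timestamp."""
    merged = []
    i = 0
    for frame in body_frames:
        # Advance through face samples until the next one is past this body frame.
        while i + 1 < len(face_samples) and face_samples[i + 1][0] <= frame.timestamp:
            i += 1
        weights = face_samples[i][1] if face_samples else {}
        merged.append(CaptureFrame(frame.timestamp, frame.joint_positions, weights))
    return merged
```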

SIGGRAPH: How do you envision “Reframe” being used in the future? What problems does it solve?

FA: We developed “Reframe” with two use cases in mind. First, for non-technical or non-professional creators, “Reframe” can be a useful tool for self-expression and for creating new types of content that were previously out of reach due to the cost and complexity of motion capture equipment and animation software. These creators might share their animations on social media or online marketplaces, or simply use “Reframe” as a means of creative expression.

Second, for professional users, “Reframe” can be a useful tool to complement their workflow. “Reframe” could be used to quickly capture motion data to understand how a scene might look during early scene layout. It can also be used for what animators refer to as “blocking,” where animators set up core poses and define a first pass for the motion. “Reframe’s” simple interface and ability to capture motion data without the need to access a motion capture setup make it a valuable tool to accelerate the development of 3D animation sequences.

SIGGRAPH: What do you hope SIGGRAPH 2024 participants take away from interacting with “Reframe” in the Immersive Pavilion?

FA: We hope people who try “Reframe” at SIGGRAPH get a sense of what is possible with this prototype and reflect on the future of 3D animation. Being hands-on with the interface allows people to experience the capabilities and limitations of the motion capture technology, as well as the editing interface. For researchers, we hope it inspires them to think about new techniques and advancements to allow for even richer data capture or more precise control during editing. For creators, we hope that they will connect with the tool and think about what they would want to make with it, and how it could fit into their workflows. For all SIGGRAPH participants who visit our booth and use “Reframe,” we hope they experience what the next generation of character animation could be and see what is possible today through new motion capture technology and our editing interface. It is one thing to read our research paper or watch our video of “Reframe” in action, but it is an entirely different experience to put on a headset and reach out to bring an avatar to life.


Fraser Anderson is a senior principal research scientist and manager with the HCI and Visualization Research group within Autodesk Research. He is interested in novel interfaces and interaction techniques, including VR and AR, as well as developing new systems and workflows for creativity and design. He received his Ph.D. in human computer interaction from the University of Alberta, routinely serves on the program committee for the CHI and UIST conferences, and has published over 30 peer-reviewed papers in top journals and conferences within human-computer interaction.

Hilmar Koch leads the research for the Future of Media and Entertainment at Autodesk Research. With partners, his team explores speculative scenarios and proofs of concept that might shape the media and entertainment industry. Right now, Hilmar is leading an effort to research how we might bring imaginative worlds to life. Prior to launching the media research effort, Hilmar worked on the strategic foresight team, also within Autodesk Research. Before Autodesk, Hilmar held positions as head of computer graphics, director of virtual production, and director of the advanced development group at Industrial Light and Magic and Lucasfilm. Hilmar’s film credits include “Avatar”, “Star Trek”, “Transformers”, “Star Wars III”, “Star Wars VII”, “Harry Potter and the Sorcerer’s Stone”, “Hulk”, “Perfect Storm”, “Galaxy Quest”, and “Pearl Harbor”. Hilmar studied mathematics and art, and in 2010 he won an Academy Award for developing ambient occlusion for “Pearl Harbor”.

Qian Zhou is a principal research scientist at Autodesk Research in Toronto, Canada. Her research interests span spatial perception, novel 3D interaction, and interfaces. Before joining Autodesk, she received her Ph.D. from the University of British Columbia, investigating perceptual factors and 3D interaction in AR/VR with award-winning publications.

David Ledo is a Venezuelan-Canadian designer, scientist, and communicator working as a senior research scientist within Autodesk Research, based in Toronto, Canada. David is part of the Human-Computer Interaction and Visualization Research Group, where he explores novel creativity support tools and authoring environments and develops a better understanding of practitioners’ processes to inform the design of future technologies. He also works on research visioning and storytelling, identifying common threads across a myriad of disciplines to describe the current technological and societal landscape and how research activities might project into the future. David received his Ph.D. from the University of Calgary in 2020.

Aniruddha Prithul is a software engineer with over a decade of experience in video game and real-time interactive application development. After obtaining his Ph.D. from the University of Nevada, Reno, in the area of virtual reality locomotion, Prithul joined Autodesk Research, where he now focuses on bridging the gap between cutting-edge VR research and its implementation in real-world applications. When not busy programming games, Prithul loves to play them. His other hobbies include reading non-fiction, travelling, and building local tech communities.

Hans Kellner is a senior manager, principal engineer within Autodesk Research and has 30+ years of experience designing and implementing software. He has researched and developed various alternative input devices and his natural user interface (NUI) research has been prominently featured in the Autodesk Gallery, Autodesk University, Autodesk TechX, Microsoft Professional Developers Conference, and other venues. He is currently researching the use of emerging technologies and how they intersect with our customers and our products. When not focused on work, Hans may be found riding his mountain bike, hiking, or backcountry skiing.

Andriy Banadyha is a senior principal research engineer at Autodesk Research. His current focus is on the media and entertainment industries. During his career at Autodesk, he has been involved in a number of projects spanning the automotive, aerospace, medical, and fashion industries. Prior to joining Autodesk, Andriy studied economics and journalism at Lviv National University.

Sebastian Herrera is a seasoned UX designer with over a decade of experience in the field. His diverse portfolio includes work in various startups, the video game industry, and the commercial aviation industry. For the past three years, he has specialized in user research. Currently, he is part of Autodesk’s Research organization, where he has been working on emerging technologies.

George Fitzmaurice, Ph.D., is a research fellow and heads the Human-Computer Interaction and Visualization Research group. In collaboration with his colleagues, he has co-authored and published over 120 research papers and been awarded over 95 patents. During the last 25 years, his research has focused on technology-assisted learning systems, knowledge capture and retrieval, highly interactive visualization systems, AR/VR, and novel input and interaction techniques. Some notable research transfer and product contributions include the Maya 1.0 UI, the SketchBook Pro UI design, the 3D navigation tools (ViewCube™ and SteeringWheels™), Autodesk Screencast, and SketchBook Motion (awarded Apple iPad App of the Year for 2016).
