The Magic Behind Spatial Control in AR

2 November 2021 | Augmented Reality, Awards, Conferences, Graphics, Interactive Techniques, Research

© 2021 Daiki Taniguchi

Immerse yourself in the future. We sat down with Daiki Taniguchi, creator of SIGGRAPH 2021 Immersive Pavilion Grand Jury Award winner “Garage: GPU Particle-based AR Content for Futuristic Experiences”, to learn more about how interactive augmented reality (AR) can capture real-world geometric and visual information. How does this translate into particle-based AR content? Read on for the answer, and to learn how the 3D AR in “Garage” allows players to manipulate their surrounding environment, almost like magic.

SIGGRAPH: Share some background about “Garage: GPU Particle-based AR Content for Futuristic Experiences”. What inspired this project?

Daiki Taniguchi (DT): We have been exploring various methods for achieving an immersive experience for some time. Specifically, we proposed futuristic games that introduce optical camouflage and holographic expressions, and a method that creates a sense of fusion by converting both the real and virtual space into a unified style using deep-learning style transfer. Those early projects were exhibited at SIGGRAPH 2018 and 2019. The approaches, however, were based on image processing and were two-dimensional; they lacked interactivity with physical objects, which is critical for an immersive experience.

In futuristic AR, we believe that both the real and virtual objects surrounding the player should be interactive and controllable. To realize that, we decided to utilize scene-depth data captured by LiDAR. Japanese sci-fi anime and manga have also been a major inspiration. For example, “PSYCHO-PASS” depicts a near-future world where people can redecorate their rooms at will simply by asking an artificial intelligence (AI) assistant. Another example is “Sword Art Online”, which depicts an AR game in which an ordinary city turns into a burning land inhabited by magnificent monsters. Works like these excite us. To get closer to such a future, we started developing “Garage”, which aims to use depth information to reconstruct a real space and then control it in a creative way.

SIGGRAPH: Break down how you developed the particle-based AR system that led to “Garage”.

DT: “Garage” was developed almost entirely by one programmer, with two advisors participating in discussions. We had a hard time demonstrating the potential of our system; we went back and forth on whether it would be better as one cohesive game or as multiple demos. Ultimately, we decided to implement a variety of demos, because we felt we could not show the versatility of “Garage” if we limited it to a single game or story. I think we made the right decision, given our goal of proposing a form of future AR system.

SIGGRAPH: For users who are not familiar, can you talk a bit about the difference between particle-based systems and more conventional polygon mesh-based systems?

DT: The most important feature of particle-based systems is their expressive flexibility. Our goal was to reconstruct the real space and then apply creative processing to it. For example, you can make a wall fly apart violently when a dragon collides with it, add sound waveforms to the floor through audio visualization, or warp a part of the human body. For this kind of flexible deformation and destruction, polygon meshes are too coarse. Of course, polygon meshes might achieve a similar effect if the triangles are made very small, but that causes performance problems.

The premise behind all of this is the challenge of reconstructing real space in real time from information obtained from LiDAR and RGB cameras, rather than using models prepared in advance through 3D modeling. Constructing accurate, fine-grained polygon meshes from depth information is a difficult problem in terms of performance. With particles, on the other hand, it is relatively easy to reconstruct reality, as long as you have an algorithm to compute world coordinates back from the depth values.
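As a rough illustration of that last step, and not the actual “Garage” implementation (which runs on the GPU via compute shaders), unprojecting a depth map into world-space particle positions with a standard pinhole camera model might look like this in NumPy; the function name and parameters are hypothetical:

```python
import numpy as np

def depth_to_world_particles(depth, fx, fy, cx, cy, cam_to_world):
    """Unproject a depth map (in meters) into world-space particle positions.

    depth:        (H, W) array of depth values, e.g. from a LiDAR sensor
    fx, fy:       focal lengths in pixels
    cx, cy:       principal point in pixels
    cam_to_world: (4, 4) camera-to-world transform
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel through the pinhole model into camera space.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)  # (H, W, 4)
    # Transform homogeneous camera-space points into world space.
    pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T              # (H*W, 4)
    return pts_world[:, :3]
```

Each output row is one particle, which a GPU system could then simulate, deform, or destroy independently — the kind of per-point freedom that a connected triangle mesh does not offer.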

SIGGRAPH: How do you envision the particle-based system being used in future AR projects?

DT: “Garage” is still a growing project. We plan to retain the advantages of the particle-based system described so far and to evolve it further, bringing it closer to the futuristic worlds you see in anime and manga.

SIGGRAPH: What challenges did you face while developing the final experience you presented to the SIGGRAPH community? When did use of GPU come into play? How and why was AI incorporated?

DT: In the first AR project in 2017, which we mentioned at the beginning of this article, we realized that making good use of the GPU to create rich expressions was key to improving immersion in AR, and we have paid attention to it ever since. In that project, we simply used the GPU through graphics shaders. Since our 2019 project, however, in which we made extensive use of compute shaders to run machine-learning inference on a game engine, we have been focusing on mastering GPGPU. Given that interest, it was, in a sense, natural for us to make the most of the GPU in “Garage”.

Currently, we do not use AI in “Garage”, but we are considering it for the future. For example, we are investigating the possibility of creating new worlds from the real world using learned rendering algorithms such as NeRF. Specifically, NVIDIA Research’s GANCraft proposes a method to convert a voxel world built in Minecraft into a photorealistic one. In the same way, we believe we can create a new AR world by changing the appearance of the real world into something completely different, while maintaining only the input geometry.

SIGGRAPH: What was your reaction to winning the Immersive Pavilion’s Grand Jury Award? And, as an award winner, what advice do you have for someone who wants to submit to a future SIGGRAPH conference?

DT: I was very surprised and pleased to receive the award among so many other great projects. We believe that “Garage” was able to win because we had a vision of “creating a world where we can reconstruct and control the real space in a creative manner”, rather than being stuck in the common perception that “AR is about placing objects on a surface”. SIGGRAPH is an amazing place where new ideas and technologies gather from all over the world. I believe your project will be even better if you don’t focus only on the technical aspects, but also add the perspective of: “What is the vision of my project?”

Submissions for SIGGRAPH 2022 will open in the coming months. Visit the website for more information about SIGGRAPH’s hybrid conference in Vancouver.

Daiki Taniguchi is a research engineer at Akatsuki Inc. His work has been accepted to SIGGRAPH 2018, 2019, and 2021.
