The Search for Realism in Holographic Displays

21 October 2021 | Emerging Technologies, Research, Students

© Aaron Demolder/VividQ

Aaron Demolder, a research engineer and doctoral candidate in his final year at Bournemouth University, recently earned first place in the graduate division of the ACM Student Research Competition during SIGGRAPH 2021. Read on to hear from Aaron on his winning research — “Enabling Reflective and Refractive Depth Representation in Computer-generated Holography”, part of the Posters program — which focuses on realism in holographic displays and the road to SIGGRAPH 2021.

SIGGRAPH: Your winning poster centers around display technology. Tell us a little bit about the research.

Aaron Demolder (AD): The aim of this project, and my doctorate, is to advance the realistic viewing experience in holographic display — with VividQ, the company I am collaborating with. Computer-generated Holography (CGH) is a technique for the reconstruction of three-dimensional imagery through the diffraction and interference of light. However, rendering a full “native” hologram poses a significant computational problem and is underdeveloped in terms of shaders/light-transport techniques. Visual results for this technique are worlds away from the modern level of fully featured production renderers.

By using depth-sliced/image-layer-based CGH and a slew of other tricks, it’s possible to cut a lot of these corners and deliver an approximation of the same experience much faster. This is where my hosts VividQ excel. By inputting RGBZ (Color + Depth) images from existing computer graphics ray-tracing or rasterizing renderers into a hologram generation suite, and dividing the scene up into discrete layers, you get the benefits of the fully featured renderer whilst ensuring hologram computation is swift.
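The depth-sliced idea can be sketched in a few lines of NumPy. This is a minimal illustration, not VividQ’s actual pipeline: it assumes a single wavelength, a plain angular-spectrum propagator, and hypothetical parameter values (pixel pitch, layer depths) chosen purely for demonstration.

```python
import numpy as np

def propagate(field, z, wavelength, pitch):
    """Angular-spectrum propagation of a complex field over distance z."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Axial wavenumber; evanescent components clamped to zero phase.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def layered_hologram(amplitude, depth, layer_depths,
                     wavelength=633e-9, pitch=8e-6):
    """Slice an amplitude image by its depth map into discrete layers,
    propagate each layer to the hologram plane, and sum the fields."""
    hologram = np.zeros_like(amplitude, dtype=complex)
    # Assign every pixel to its nearest depth layer.
    layer_idx = np.argmin(
        np.abs(depth[..., None] - np.asarray(layer_depths)), axis=-1)
    for i, z in enumerate(layer_depths):
        layer = np.where(layer_idx == i, amplitude, 0.0)
        hologram += propagate(layer.astype(complex), z, wavelength, pitch)
    return hologram
```

In practice each color channel would be processed with its own wavelength, but a single channel is enough to show how an RGBZ input maps onto discrete hologram layers.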

But, of course, as any compositing artist will know, RGBZ data alone is nowhere near enough information to represent the true depths of objects in a scene that have material properties such as reflection (mirrors) or refraction (glass, water), for example.

The solution is some additional depth data from the renderer, combined with a process that looks very much like the visual effects compositing workflow — but instead of working in 2D, the technique works in the hologram (frequency) domain. By generating a hologram for each of the reflection/refraction elements of the render independently, it’s possible to assign each its own correct depth information in the final hologram. The result is the ability to display objects with realistic materials, including their focal properties.
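The per-element compositing step can also be sketched. Because light fields superpose linearly, a hologram can be generated for each render element at its own depth and the complex fields simply summed. The Fresnel transfer function, parameter values, and element names below are illustrative assumptions, not the exact method from the poster.

```python
import numpy as np

def fresnel_transfer(shape, z, wavelength=633e-9, pitch=8e-6):
    """Paraxial (Fresnel) transfer function for propagation over z."""
    fy = np.fft.fftfreq(shape[0], d=pitch)
    fx = np.fft.fftfreq(shape[1], d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

def composite_hologram(elements, wavelength=633e-9, pitch=8e-6):
    """Build one hologram per render element at its own depth, then sum
    the complex fields. `elements` is a list of (amplitude, depth) pairs,
    e.g. the beauty pass, a reflection pass carrying mirror-space depth,
    and a refraction pass carrying through-glass depth."""
    shape = elements[0][0].shape
    total = np.zeros(shape, dtype=complex)
    for amp, z in elements:
        spectrum = np.fft.fft2(amp.astype(complex))
        total += np.fft.ifft2(
            spectrum * fresnel_transfer(shape, z, wavelength, pitch))
    return total
```

The key point the sketch captures is that each element keeps its own depth: a reflection seen in a mirror is placed at its virtual-image distance rather than at the mirror surface, so it focuses correctly in the displayed hologram.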

SIGGRAPH: Share with readers how you developed your poster. What challenges did you face or overcome?

AD: For me, the research process is production-led or goal-led, with creative and practical elements — so very much chasing a specific visual result and a certain ease of use. The goal in this case was to form a hologram that achieves reflective and refractive representation. I noticed this was a significant gap in the existing research. Working toward the best method was a case of trying numerous avenues and weighing the pros and cons.

The virtual nature of the conference didn’t change my research route so much; either way, the results needed to be clear and concise, in text and in the images produced, no matter whether they were viewed on an in-person holographic display or virtually!

One of the challenges I faced was balancing a lot of skills to produce the images — designing the optical elements in 3D to be physically correct, modelling the rest of the scene, doing realistic look development, adjusting code and nodes in Maya to get the extra depth information I needed from the renderer, extracting the render layers, and editing them in Nuke to look their best. From there, I was writing and adjusting the code to generate the holograms and produce the final images! There’s a fair amount of trial and error involved, but this production-led approach meant I encountered practical problems naturally and could come up with appropriate practical solutions.

SIGGRAPH: How do you anticipate your research will be used in the future? What problems does it solve?

AD: The key problem solved is that there is now a correct representation of depth when light interacts with reflective and refractive surfaces in CGH imagery. It sounds subtle, but it really adds a lot to the natural look and feel of an image — as well as being a significant tool for storytelling and composition. As CGH makes its way into the mainstream, it would be great to see access to depth passes for arbitrary output variables (AOVs) built into production renderers so that this technique can be utilized more widely.

SIGGRAPH: What do you find most exciting about the final product you presented to the SIGGRAPH 2021 community? What’s next for “Enabling Reflective and Refractive Depth Representation in Computer-generated Holography”?

AD: I think there are two things: first, how simple, yet effective, the method is, and second, how compatible it is with existing workflows. I’m also quite pleased with the demonstration image as it really highlights the benefit of such a technique, and also looks quite pretty while still showing what more needs to be done. As mentioned in the conclusion of the poster abstract, I’d like to develop some solid methods for occlusion and see the reflection/refraction work sit in the larger picture of what’s possible with CGH.

SIGGRAPH: Talk a bit about your reaction to winning first place in the ACM Student Research Competition. How might receiving this award help you in your career post-doctorate?

AD: My first reaction was being over the moon and, honestly, quite surprised! There’s a lot of excellent research submitted each year to the ACM Student Research Competition, and to be selected out of all of it really is an honor. Going forward in my career, I imagine the award will be great recognition of the quality and relevance of the research I can produce.

SIGGRAPH: What advice would you give to other graduate students thinking of submitting to an ACM SIGGRAPH conference/competition?

AD: Do it! It will not only give additional purpose to your project if you intend to submit, but it will also ensure you are rigorous with your methods and push the quality you attempt to deliver in the results. There is no downside, and being recognized for your work is a bonus.

Interested in SIGGRAPH? Check out the SIGGRAPH 2022 website to learn more about what’s to come!


Aaron Demolder is a digital media engineering doctorate (EngD) candidate at the EPSRC-funded Centre for Digital Entertainment (CDE) at Bournemouth University in the United Kingdom. He is currently completing his EngD whilst embedded within VividQ Ltd in Cambridge, with a focus on better incorporating emerging technology into art-driven pipelines. Aaron holds a B.A. (Hons) in computer visualisation and animation from the National Centre for Computer Animation. Aaron’s work at VividQ balances between experimenting with content for the new generation of display technology and driving improvement of the holographic generation system that enables it.
