An ‘Irresistible’ Way to Make Films: Using Real-Time Technology for ‘Book of the Dead’

19 March 2019 | Conferences, Film, Production, Real-Time, Software, Visual Effects

“Book of the Dead” © 2018 Unity Technologies (Sweden)

Real-time technology is enabling creators to have projects at their fingertips through all stages of the process — and that includes filmmakers.

“Book of the Dead,” created by the Unity Demo team, is an interactive short film that features a journey through a photorealistic forest inhabited by mystical creatures. It was shown on the Real-Time Live! stage at SIGGRAPH 2018 in Vancouver, demonstrating how virtual production can be utilized to shoot a sequence for a short film. (The short film was also a selection of the SIGGRAPH 2018 Computer Animation Festival.) The real-time presentation consisted of a simulation of a live shoot using Technicolor’s virtual production platform, Genesis, which has been successfully employed on multiple film productions. Since Genesis is powered by Unity’s real-time 3D development platform, the “Book of the Dead” team was able to showcase the advances in real-time rendering quality inside a beautiful virtual environment. Check out the beauty for yourself in the teaser below.

We had the pleasure of taking a deep dive into the creation of “Book of the Dead” with three members of the team: Veselin Efremov, creative director for the Unity Demo team; Torbjorn Laedre, technical lead on all Unity Demo team projects; and, Francesco Giordana, principal architect at MPC and development lead for Technicolor Genesis. Discover how real-time technology is revolutionizing film production.

SIGGRAPH: Let’s start at a high level. What is unique about “Book of the Dead” in terms of real-time technology?

Veselin Efremov (VE): First of all, the massive improvement of visual quality that is brought about by Unity’s new Scriptable Rendering Pipeline (SRP). “Book of the Dead” uses the High Definition Render Pipeline (HDRP), which — from an artist’s perspective — just automatically makes things look nice. Working with physically correct assets through photogrammetry and scanning ensures that what I’m seeing is as high fidelity as it gets, and what the engine does remains in the background. There isn’t anything special I have to do — it preserves this level of fidelity through the rendering and does that in a very performant way. This hasn’t been possible before. Typically, the process would involve some adjustment to compensate for the shortcuts that the rendering must take for the sake of being performant, but now we have performance at this level of quality and it just works.

The other aspect is the virtual cinematography process, which is made possible by Genesis. The way I see it, these tools bring the freedom we enjoy in game development to the world of filmmaking. They give directors and cinematographers a way to have more control, iterate on their creative vision, and nail every possible aspect of their shot. Creators can experiment easily and discover the best version by trying many alternatives. Without a real-time 3D platform, there is no way to do this.

The way I see it, these tools bring the freedom we enjoy in game development to the world of filmmaking. They give directors and cinematographers a way to have more control, iterate on their creative vision, and nail every possible aspect of their shot.

As a creator with a background in game development, I am working in a way that is very non-traditional if you compare it with how films or CG productions are made. I am used to this freedom; I take it for granted that my project — in this case, my entire film — is always at the tips of my fingers, and that I am working at any moment with any given component of the film, from camera motion and shot blocking to lighting and post processing. No matter what I do, I am able to see the result of my action immediately, in the context of all other components of a shot, and in the context of the entire film. If I change a light, for instance, I just press “play” and see a new version of the film where the lighting in this shot is different. From a game development perspective, you’d think, “Well, but of course. How else would you work?” But film production is so much more disciplined. Nowadays, virtual cinematography is getting more adoption, thanks to pioneers like MPC and related software tools, and I am so excited to see this happen.

SIGGRAPH: Getting into the technical, what exactly is Genesis? How does it work?

Francesco Giordana (FG): Genesis is Technicolor’s virtual production platform, developed over the last two years at MPC. It focuses on making the filmmaking process, from previs to final, as seamless as possible. It’s not exactly a single piece of software, but rather a configurable distributed system with a multitude of moving pieces interacting with each other in real time.

At the core of this system is a robust data pipeline, which we think is Genesis’s most distinctive feature. From the moment assets are created to the end of virtual production when shots are turned over to VFX, every asset, creature, camera change, lighting change, or minor modification to a scene is tracked, timecoded, and recorded. This recorded data is then catalogued and labeled inside our database and remains available for future iterations.

Every asset, creature, camera change, lighting change, or minor modification to a scene is tracked, timecoded, and recorded.

This careful handling of data, coupled with a robust networking system, allows for highly collaborative workflows — another differentiating factor. Filmmakers and their crews can work together live, on or off set, on a shared scene that is synchronized across any number of computers. Thanks to Technicolor’s global network infrastructure, these computers can even be located at different sites around the world, without losing that feeling of “togetherness.” In the case of “Virtual Production in Book of the Dead” at SIGGRAPH 2018’s Real-Time Live!, real-time visualization of CG elements on stage was a major selling point for us, so we decided to rely on an existing real-time rendering solution: Unity.
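
To make that idea concrete, here is a minimal sketch of what one tracked, timecoded record might look like. Genesis’s actual data model is not public, so every field name below is an assumption for illustration only.

```python
# Hypothetical sketch of a single tracked change record; Genesis's real
# schema is not public, so these field names are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class SceneChangeRecord:
    timecode: str       # SMPTE timecode at which the change was captured
    take_id: str        # take the change belongs to
    target: str         # scene path of the asset, camera, or light
    property_name: str  # e.g., "intensity" or "focalLength"
    value: float        # new value, serialized for the database
    author: str         # operator or workstation that made the change

record = SceneChangeRecord("01:02:14:08", "take_012",
                           "/Environment/KeyLight", "intensity",
                           1800.0, "dop_station_2")
print(json.dumps(asdict(record), indent=2))
```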

SIGGRAPH: This was a very collaborative project. How did the team for “Book of the Dead” come together?

VE: The Unity Demo team was started in 2012 with the mission to drive Unity’s technology forward with the help of actual content productions. I joined in 2014 to create Unity’s first in-house produced real-time short film, “The Blacksmith,” released in 2015. We were only four people on that production. Gradually, the team grew to include some of my favorite creators who had a lot of experience in the game industry, as well as other experienced people from Unity. The next year, we shipped “Adam,” which was so well received that it got two sequels, while we went on to create “Book of the Dead.” Nowadays, the team consists of 12 people, half artists and half programmers. We collaborate remotely across several countries, working out of offices in Stockholm, Copenhagen, London, Sofia, Vilnius, and Umeå.

The mission of the team hasn’t changed. We make creative projects in the form of real-time short films or interactive experiences, based on our own IPs, which we develop internally. These projects are designed to push the technology in various ways beyond its present-day capabilities. We look for what is difficult to do and what people may think is impossible, and we start doing it to make it possible. “Book of the Dead” was a project that you couldn’t do back in 2017; it wouldn’t have looked like it does. But by the end of 2018, a Unity user could grab the engine and create something that looks exactly like that.

We look for what is difficult to do and what people may think is impossible, and we start doing it to make it possible.

FG: Unity’s real-time 3D development platform is a flexible and extensible framework, built in a way that allows plenty of room for custom development. Around the time “Book of the Dead” was presented at Real-Time Live!, Unity was gearing up to roll out the HDRP, a major update to their rendering pipeline aimed at providing a massive improvement of the visual quality achievable in real time. It so happened that this timing was perfect, since we were looking for a way to prove to the film industry that our virtual production platform was capable of producing similarly compelling imagery.

Because this new technology was not yet publicly available, MPC was granted early access by Unity, along with plenty of support for getting it to a working state in time for SIGGRAPH 2018. Veselin Efremov was in charge of the content, while our team focused on creating a compelling presentation that would showcase our Genesis technology in the best possible way — live on the Real-Time Live! stage.

The assets and environment within “Book of the Dead” were created and provided to us by the Unity Demo team. Initially, making everything functional — and, more importantly, trackable — in our system required quite a bit of work. Because the assets weren’t a part of our pipeline from the start, this hurdle demanded we build brand new tools within Genesis to “legalize” third-party content. Those tools now let us handle similar use cases with other external content, too.

SIGGRAPH: How does the technology behind “Book of the Dead” tie in with filmmaking, and, more specifically, virtual production?

FG: First of all, it’s quite common in film production to collect scans of the real locations and environments in which the crew will be shooting. This can include lidar scans of a real set where live-action plates are shot, or scans of miniature models made by artists, or libraries of common elements sourced from photographs. Photogrammetry is another common practice in film production and VFX, where it is used for the acquisition of assets with photorealistic accuracy. The resulting 3D assets are then used for the construction of 3D (CG) environments, which serve as virtual sets or set extensions in films. Many of the assets you see inside “Book of the Dead” were acquired through photogrammetry.

SIGGRAPH: Let’s explore that. Torbjorn, can you walk us through the photogrammetry process for “Book of the Dead”?

Torbjorn Laedre (TL): The number of pictures needed for a good photogrammetry capture depends on the object being scanned. Sometimes as few as 30 photos can be enough, and in other cases you need to take more than 1,000. For the trunks of pine trees, for instance, it was roughly 100. With smart processing algorithms — we used RealityCapture — it’s possible to get high-quality results, even from captures with consumer equipment. I have a pocket camera that’s 15 years old, which I use to take my family’s vacation photos, and an iPhone 4S. I’ve done photogrammetry captures with both. I don’t have any specialized photography equipment at all.

Working with scans gives you the highest level of realism — after all, nothing comes closer to nature than nature itself.

And, because the technique is so accessible, you can be opportunistic. I am lucky to live in an area with beautiful nature — in a green, Stockholm suburb — and very close to a forest. While I’m out on a walk with my kids, I may spot an interesting object and think, “Oh, this trunk or rock would fit perfectly in this scene that I’m working on,” and I stop for 15 minutes to snap some photos of it. The ideal conditions are when it’s overcast or the object is in shadow. Directional light (and its shadows) and lighting changes are bad for photogrammetry. It is possible to salvage such photos with the help of Unity’s De-lighting tool, which uses an algorithm to remove the lighting information. So I don’t do too much planning; though, of course, when it’s overcast, I am more likely to go, “Hey kids, let’s go take a walk!”

I should also add that it is not necessary to rely only on oneself to capture everything with photogrammetry. In all of our high-fidelity projects, the huge library of Quixel Megascans has been our go-to starting point. Working with scans gives you the highest level of realism — after all, nothing comes closer to nature than nature itself. We also had a dedicated collaboration with Quixel for “Book of the Dead,” and they made all of the small shrubs and plants that populate our forest environment. The Megascans library offers a wide range of ready-to-use objects, which give us plenty of variety to work with when building different biomes.

SIGGRAPH: Indeed, forests are anything but minimalist. “Small shrubs and plants” need to be everywhere. How did you add enough small details throughout the landscape to make the experience feel real?

TL: To achieve natural-feeling distribution of detail throughout the landscape, we combined smart, context-aware object scattering with physics simulation-based tooling.

Smart and context-aware scattering meant that any time a pine tree was placed in the world, it would pick a random tree variant and populate it with unique properties. Automatic rules attached to the tree would dictate that branches, twigs, cones, and needles accumulate around the base of a pine tree over time. Before adding those detail objects, the rule set would look at the properties of the placed tree as well as the environment around the tree and tweak the outcome accordingly. The outcome would be different depending on the slope of the nearby terrain, which type of ground was at the base of the tree, whether there were other trees or objects underneath the tree crown and so on.

Once the automatic distribution rules had determined which detail objects to place where, the tool would run an actual physics simulation on these objects to allow them to fall from the tree crown and settle naturally in the environment. To complement this automatic distribution, we also employed explicit physics brushes based on the same principle. Environment artists could pick from group presets, and either drop or spray randomized selections from these into the world. Again, these would simulate physics until naturally settled in the environment.

Basing environment population on these types of tools not only saved us from having to hand-place a huge number of objects in the world, but also avoided the risk of a natural environment looking man-made and artificial.
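
As a rough illustration of the approach Laedre describes, the sketch below fakes the two stages: context-aware counts, then a “settle” step. The production tooling ran inside Unity with a full physics simulation; here the settling is reduced to dropping each object onto a height field, and all constants are invented.

```python
# Simplified, hypothetical sketch of context-aware debris scattering.
# The real tools ran a physics simulation inside Unity; here "settling"
# is just a height-field lookup, and all numbers are made up.
import random

GROUND_DENSITY = {"soil": 1.0, "rock": 0.4, "moss": 1.3}  # illustrative multipliers

def scatter_tree_debris(tree_pos, slope_deg, ground_type, height_at, rng=random):
    """Place cones/twigs/needle clusters around a pine tree, adjusted by local context."""
    count = int(40 * GROUND_DENSITY.get(ground_type, 1.0)
                   * max(0.2, 1.0 - slope_deg / 45.0))  # fewer items on steep slopes
    debris = []
    for _ in range(count):
        kind = rng.choice(["cone", "twig", "branch", "needle_cluster"])
        dx, dz = rng.uniform(-3.0, 3.0), rng.uniform(-3.0, 3.0)  # within the crown radius
        x, z = tree_pos[0] + dx, tree_pos[2] + dz
        y = height_at(x, z)  # stand-in for the physics settle step
        debris.append({"kind": kind, "position": (x, y, z),
                       "yaw": rng.uniform(0.0, 360.0)})
    return debris

# Usage with a flat stand-in terrain:
items = scatter_tree_debris((10.0, 0.0, 5.0), slope_deg=12.0,
                            ground_type="moss", height_at=lambda x, z: 0.0)
print(len(items), items[0])
```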

FG: Yes, it’s exciting that Unity’s new HDRP is physically based, meaning that virtual lights and materials interact with each other in a way that follows the laws of physics. This aspect is essential when working with an experienced cinematographer. With the representation of accurate lighting and reflections available in-engine, we have the capability to manipulate light sources in the same way you would on a real set with real lighting.

In order to bridge the gap between live-action shooting and virtual production, MPC, in collaboration with Technicolor R&I, developed a series of rendering solutions that approximate the functions of a traditional physical camera. We concentrated specifically on establishing a realistic “depth of field” effect. This means our virtual cameras can operate exactly as a physical camera would, and the visual response in the image very closely resembles that of a real camera. This, along with the integration of a variety of physical hardware devices, aids experienced camera crews in working together on a virtual set just as they would in live-action.
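
For readers curious about the optics behind that “depth of field” response, the snippet below computes the textbook thin-lens circle of confusion. This is standard camera math rather than MPC’s implementation, and the numbers in the example are arbitrary.

```python
# Textbook thin-lens circle-of-confusion calculation; standard optics,
# not MPC's or Unity's implementation.
def circle_of_confusion(focus_dist_m, subject_dist_m, focal_length_mm, f_number):
    """Blur-circle diameter on the sensor (mm) for a subject off the focus plane."""
    f = focal_length_mm / 1000.0   # focal length in metres
    aperture = f / f_number        # entrance-pupil diameter in metres
    s1, s2 = focus_dist_m, subject_dist_m
    return abs(s2 - s1) / s2 * aperture * f / (s1 - f) * 1000.0  # back to mm

# Example: a 50 mm lens at f/2.8 focused at 3 m, with the subject at 5 m.
print(round(circle_of_confusion(3.0, 5.0, 50.0, 2.8), 3), "mm blur circle")
```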

With the representation of accurate lighting and reflections available in-engine, we have the capability to manipulate light sources in the same way you would on a real set with real lighting.

SIGGRAPH: How does Genesis cater to experienced filmmakers who are less interested in virtual equipment and prefer to operate traditional film hardware?

FG: When we work with an experienced live-action crew, we’re fortunate enough to have at our disposal decades of experience in filmmaking using real cameras and practical sets, so we find that it’s important to provide them with tools that feel familiar. We’ve managed to encode a wide range of traditional camera equipment, like cranes, fluid heads, and dolly tracks, and took measures to ensure these tools feel the same as the hardware they’re accustomed to.

One of the drawbacks of using encoded hardware in lieu of the traditional route is forfeiting physical limitations. Traditional filmmakers are familiar with certain shortcomings of film hardware. A good example of this is focus pulling — on a real camera, there is not yet a robust way to auto-focus on specific elements of a scene, so this remains a manual operation, which requires a skilled AC (assistant camera). A virtual camera, on the other hand, can be set to auto-focus on any point in the scene. Depending on the filmmaker’s preference, we may decide to replicate manual focus pulling on our virtual camera. This preserves our filmmaker’s creative vision and enables them to work in a way that feels natural.

SIGGRAPH: What kinds of tools does Genesis use to track through and manipulate “Book of the Dead” and other similarly designed scenes in real time?

FG: We’ve developed a USD-based pipeline for moving data across the different applications that comprise the Genesis platform. USD is an especially powerful open-source scene description format that was developed by Pixar, which allows us to compose scenes of incredible complexity from lightweight, layered descriptions. In practice, USD offers flexibility in how you represent your data with hierarchies, layers, and other constructs. This pipeline allows us to travel back and forth between our DCCs — such as Maya, Houdini, and MotionBuilder — and the Unity game engine to simplify content iteration.
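
As a small, hypothetical example of the kind of layered composition Giordana describes, the sketch below uses Pixar’s open-source USD Python bindings to reference an environment asset and stack a “shoot overrides” sublayer on top of it; the file names are invented and this is not Genesis’s pipeline code.

```python
# Minimal USD layering sketch using Pixar's open-source `pxr` bindings.
# File names are hypothetical; this is not Genesis's actual pipeline code.
from pxr import Usd, UsdGeom, Sdf

# Root stage for the shot, referencing the environment assembly authored in a DCC.
stage = Usd.Stage.CreateNew("forest_shot.usda")
env = UsdGeom.Xform.Define(stage, "/Environment")
env.GetPrim().GetReferences().AddReference("assets/forest_env.usda")

# On-set changes (camera moves, lighting tweaks) go into a separate sublayer,
# so the underlying asset layers are never touched.
overrides = Sdf.Layer.CreateNew("shoot_overrides.usda")
stage.GetRootLayer().subLayerPaths.append(overrides.identifier)

stage.GetRootLayer().Save()
overrides.Save()
```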

Additionally, we’ve attached our virtual production pipeline to MPC’s existing VFX pipeline to ensure that, when the virtual production phase has ended, everything that was shot on stage can be loaded up straight into layout. Usually this is the most tedious portion of the filmmaking process, but our time and effort spent standardizing our pipeline has made it so this is one of our strongest points.

When the virtual production phase has ended, everything that was shot on stage can be loaded up straight into layout.

Another exciting component is the live generation of data during shooting. This data is streamed over the network, distributed between different users, and constantly recorded to ensure every take can be successfully reconstructed further down the line. We use this tool primarily for generating animations for virtual cameras, but its core function is to capture every change that takes place on the virtual stage. These changes could be set dressing, lighting, or even tweaking the parameters of a procedural effect, such as wind speed and direction. We complement this objective with a system for remote triggering of an arbitrary number of external recording systems, such as motion capture or A/V recording. This results in proper capturing, labeling, and assetizing within our database at the tail end of a shoot.
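
As a toy sketch of that recording loop, and nothing more, the code below timestamps each change, appends it to a local take log, and pushes it to a list of subscriber endpoints. The endpoints, field names, and transport (plain UDP) are all assumptions, not Genesis internals.

```python
# Toy sketch of timecoded change capture and distribution; endpoints,
# field names, and the UDP transport are assumptions.
import json, socket, time

SUBSCRIBERS = [("127.0.0.1", 9101), ("127.0.0.1", 9102)]  # hypothetical peers/recorders

class TakeRecorder:
    def __init__(self, take_id):
        self.take_id = take_id
        self.events = []  # local recording of every change in this take
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def publish(self, target, prop, value):
        event = {"take": self.take_id, "timestamp": time.time(),
                 "target": target, "property": prop, "value": value}
        self.events.append(event)                    # record locally
        payload = json.dumps(event).encode("utf-8")
        for addr in SUBSCRIBERS:                     # distribute live
            self.sock.sendto(payload, addr)

rec = TakeRecorder("take_012")
rec.publish("/Environment/Wind", "speed", 4.2)
rec.publish("/Cameras/CamA", "focalLength", 50.0)
```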

SIGGRAPH: Let’s talk about the wind, which you just mentioned. It really does look natural in the short film. What is innovative about the treatment of wind in “Book of the Dead”?

TL: One of the things we focused a lot on was capturing the feeling of global wind moving coherently through the environment. To achieve this coherence, we used a single procedural noise function for every piece of vegetation in the world. We were able to further improve unique variation by re-sampling this function for every hierarchy level in a tree — trunk, branch, twig, needle cluster — instead of just applying a single sample for each instance.

In order to help guide the procedural simulation closer to real-world behavior, we decided to export the entire environment and run an offline flow simulation through the forest. From this simulated data set, we imported a volumetric snapshot back into Unity, and applied it in the noise function to modify the procedural intensity and turbulence seen throughout the world.
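
Here is a rough, conceptual sketch of the two ideas above: one global noise function re-sampled per hierarchy level, with a baked intensity value (standing in for the flow-simulation volume) modulating the result. The phase offsets, gains, and the noise function itself are invented for illustration.

```python
# Conceptual sketch: one shared wind-noise function, re-sampled per hierarchy
# level and scaled by a baked flow-simulation intensity. All constants and the
# noise function itself are stand-ins, not the production implementation.
import math

LEVELS = {"trunk": (0.0, 1.0), "branch": (3.1, 1.6),
          "twig": (7.4, 2.2), "needles": (11.9, 3.0)}  # (phase offset, gain)

def wind_noise(t, x, z, phase=0.0, frequency=0.35):
    # Cheap periodic pseudo-noise standing in for the real procedural function.
    return 0.5 * (math.sin(frequency * (x + t) + phase)
                  + math.cos(frequency * (z - 0.7 * t) + phase))

def wind_offset(level, t, x, z, baked_intensity=1.0):
    """Sway contribution for one hierarchy level at a world position and time."""
    phase, gain = LEVELS[level]
    # baked_intensity would be sampled from the volumetric flow-simulation snapshot.
    return gain * baked_intensity * wind_noise(t, x, z, phase)

# Example: twig-level sway at world position (12.0, 4.5) at t = 2.0 s.
print(wind_offset("twig", 2.0, 12.0, 4.5, baked_intensity=0.8))
```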

On the content side, every piece of vegetation was modeled and configured with physical properties of how it should react to external force. We wanted to make sure we could capture natural-looking bending from wind acting on grass, bushes, branches, and trees, as well as give the feeling of inertia to gusts of wind. At the apex strength of wind gusts, an additional level of internal animation was added to really accentuate the intense rustling and fluttering of blades of grass, leaves, and needle clusters you can observe in nature on a windy day.

We wanted to make sure we could capture natural-looking bending from wind acting on grass, bushes, branches, and trees, as well as give the feeling of inertia to gusts of wind.

SIGGRAPH: What are the limitations, if any, of Genesis in productions? Is there a difference in the platform’s application between CG and live-action films?

FG: To date, we have focused solely on full CG shoots, but there is nothing about this technology that prevents us from using Genesis on a live-action shoot. The main difference would be the integration of live plates with computer-generated images in real time. Currently, this would require a live comp, but we have made integrations into our system that allow for external solutions to this challenge. For example, our R&I team is currently working on the integration of LED backdrops behind live actors during live-action shoots. With the use of these high-resolution walls, we’re able to modify backgrounds in real time in order to maintain accurate lighting on actors, or even use them as a final background in our live plate.

SIGGRAPH: Put simply, what excites you about this work?

VE: What excites me is the freedom to craft stories and worlds that I find interesting and to explore visual styles and narrative themes that I care about, all under the thrilling challenge of making whatever we do look better than what was possible before. When it comes specifically to virtual cinematography, it just feels so empowering — being able to achieve exactly the shot you envision, see the result immediately, and be able to tweak every component of it. Virtual cinematography liberates the entire creative process, both the director and everyone around them. It can fundamentally change what collaboration looks like in a creative team.

In real-time production, I don’t have to imagine, or even plan, the entire film beforehand and then work with my teammates to execute my vision. I can be creative during the entire production process — I can explore variants, change my mind, and discover interesting ideas as I work. As a writer and director, I can see how the others are doing the same, and I can be very opportunistic about catching a nice idea that comes from animation, from an actor’s performance, or from a lighting or rendering feature, and incorporating that into the narrative and making it a part of the film easily and fluidly, as if it was always intended to be that way. This freedom widens the window of creativity. Inspiration and ideas come from all directions.

Virtual cinematography liberates the entire creative process, both the director and everyone around them. It can fundamentally change what collaboration looks like in a creative team.

It’s exciting to think that real-time tools for film production will get increasingly broader adoption and will change how films are made. They’re bound to, because this way of working is irresistible.


Submissions for SIGGRAPH 2019 Real-Time Live! close on 9 April. Share your real-time project today!


Veselin Efremov has been creative director for the Unity Demo team since 2014. He wrote, directed, and art-directed the company’s award-winning real-time short films “The Blacksmith” (2015), “Adam” (2016), “Book of the Dead” (2018), and, most recently, “The Heretic” (2019). With a background as an artist and art director in the game development industry, he is now working at the intersection of game and film, using real-time technology for the creation of cinematic experiences.

Torbjorn Laedre has been the tech lead on all projects from the Unity Demo team since 2014. Previously, he spent a great many years wrangling code for a bunch of well-known AAA engines and games. He has held several lead and principal roles in projects across DICE, Ubisoft, and EA, and in later years shipped a couple of games built on Unity. These days, his code check-ins are mostly delivered by carrier pigeons from somewhere deep in the Swedish wilderness.

Francesco Giordana is a principal architect at MPC with a long career in both games and VFX, focusing on the use of real-time technologies applied to filmmaking. Having worked on both AAA games and Oscar-winning films, he is now dedicated mostly to virtual production, as the architect and development lead for Genesis, Technicolor’s virtual production platform.
