Visually Jamming at the Sphere

3 December 2024 | Conferences

Image credit: Photo by Rich Fury, Sphere Entertainment

Get an in-depth look at the SIGGRAPH 2024 Production Sessions contribution, “Pushing the Limits: Crafting an Immersive Mega-Canvas for Phish’s Music Shows at Sphere™”, in which Moment Factory was tasked with creating 12 hours of animations for Phish’s four nights at the Las Vegas Sphere.

SIGGRAPH: Discuss the modular show-flow that you developed for this project.

Guillaume Borgomano (GB): For the Phish show at the Sphere, we embraced a modular approach with the show-flow, which became the backbone of our production process. Phish’s reputation as a jam band presented a unique challenge — while their performances revolve around improvisation, the technical elements of a show, such as lighting, visuals, and all kinds of cues, typically follow a strict timeline.

To address this tension, we approached the problem as the construction of a “live VJing set,” creating a system that could adapt in real time and visually “jam” alongside the band. We had to bridge the gap between technical constraints and creative ambition, ensuring that every moment of the 12 hours of content stayed dynamic while supporting the band itself.

The solution lay in designing a hybrid system that combined pre-rendered and real-time content. For the “classic,” baked-content approach, we prepared an extensive bank of visual assets — a library of pre-rendered animations and loops. This allowed us to deploy highly polished visuals on the fly, cued to adapt seamlessly to Phish’s musical flow. These pre-rendered assets provided a strong foundation, ensuring that the direction of each song was captured and ready to be triggered when needed.

For more live adaptability, we also turned to real-time rendering tools: Unreal Engine and Notch. These enabled us to create scenes that could run live, offering a level of flexibility rarely seen at such large-scale resolutions. By exposing parameters, we could manipulate the scenes dynamically during the band’s performances. This real-time improvisation allowed us to extend playback times and control the imagery live, in sync with the music, effectively making the visuals as fluid and spontaneous as the band’s performance.
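To make the idea concrete (this is a minimal sketch, not Moment Factory's actual tooling, whose internals aren't public here), a hybrid cue bank might be modeled roughly like this in Python, with pre-rendered clips and parameter-exposing real-time scenes living side by side; all names and paths are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, Union

@dataclass
class PreRenderedCue:
    """A baked clip from the asset bank, triggered as-is."""
    clip_path: str          # hypothetical path, e.g. "assets/intro_loop.mov"
    loop: bool = True

@dataclass
class RealTimeCue:
    """A live Notch/Unreal-style scene with parameters exposed for 'jamming'."""
    scene_name: str
    params: Dict[str, float] = field(default_factory=dict)

    def set_param(self, name: str, value: float) -> None:
        # Clamp to a normalized 0..1 range so operator input stays safe.
        self.params[name] = max(0.0, min(1.0, value))

Cue = Union[PreRenderedCue, RealTimeCue]

# A song's show-flow becomes an ordered bank of cues the operator can fire
# in any order, matching the band's improvisation.
song_bank: Dict[str, Cue] = {
    "intro_loop": PreRenderedCue("assets/solid_night/intro_loop.mov"),
    "jam_scene": RealTimeCue("abstract_particles",
                             {"saturation": 0.5, "anim_speed": 0.3}),
}

song_bank["jam_scene"].set_param("anim_speed", 0.8)  # push energy during a jam
```

The point of the structure is that a cue is simply "something the operator can fire," whether it plays a baked clip or drives a live scene whose parameters remain adjustable mid-song.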

Every song in the setlist was approached with care as we determined the best pipeline for its unique artistic demands. Some pieces leaned on pre-rendered assets to achieve intricate compositing and precise effects, while others relied on real-time scenes, playing to Notch’s strengths for live IMAG effects and interactive abstract composition, and to Unreal’s for responsive, hyper-realistic environments.

This framework was more than a creative solution — it was a structural necessity. Each of the four nights was themed around a distinct state of matter (solid, liquid, gas, and plasma), requiring a staggering amount of content to bring these concepts to life. By breaking the show into modular features, we crafted an immersive visual experience that felt alive and connected to the music.

SIGGRAPH: How did you use AI to generate visuals and certain animations? 

GB: At Moment Factory, our innovation team has been exploring latent spaces for some time now. Driven by curiosity and a hands-on desire to push boundaries, we’ve continuously experimented with generative AI tools to unlock their potential. A project like the Phish show was the ultimate playground to stress-test these explorations in a real-world production environment.

Our recursive deep dives into generative AI were instrumental in shaping how we optimized its use for the Phish show. Early on, we focused on identifying the best ways to leverage Stable Diffusion models, taking time to build workflows within ComfyUI that met our foundational criteria: quality, level of detail, resolution, consistency, and control. These early experiments allowed us to gauge limits and to develop specific multimodal workflows and “recipes” in ComfyUI that aligned with the creative direction of the show.
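As a rough, illustrative stand-in (the team's actual workflows were node graphs built in ComfyUI, which aren't reproduced here), programmatic Stable Diffusion generation with the open-source diffusers library looks something like the sketch below; the checkpoint, prompt, and settings are placeholders, not what was used in production:

```python
# Minimal illustration of Stable Diffusion asset generation with diffusers.
# This is NOT the production ComfyUI workflow; everything here is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="macro crystalline ice texture, tileable, volumetric light",
    negative_prompt="text, watermark, blurry",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("ice_texture_element.png")  # later assembled and composited into full scenes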

Through this iterative process, we refined our understanding of what worked, what didn’t, and where generative AI could be most effectively applied. It became a tool to direct and generate assets — ranging from textures, elements, and parts of environments to loops and animations — that were later assembled, composited, and post-processed into comprehensive scenes. This pipeline allowed us to balance the efficiency of automation with the nuance of hands-on craftsmanship, ensuring that every piece retained a distinctly human touch while benefiting from generative AI’s possibilities.

Our approach to generative AI is fundamentally rooted in human control and craft. Every workflow is built collaboratively with our artists and technical artists, designed to solve creative challenges while empowering the people behind the tools. This ethos ensures that the process is made by artists, for artists. Our goal isn’t to replace human abilities but to enhance them — amplifying creativity while preserving the artistry and intention that define our work.

SIGGRAPH: How were you able to create 12 hours of animations for Phish if you were not able to test it out at the Sphere beforehand? Did you create a prototype of some sort?

GB: Creating a huge amount of content for the Phish shows without direct access to the Sphere for testing was a significant challenge, but one we approached with two key previsualization strategies: a VR-based preview and a physical mini-sphere mockup.

For the VR approach, artists were able to experiment with different UV mapping templates to determine the best fit for their scenes and to get a feel for animation speeds and scales. We could send output scenes directly to previsualization, encouraging more frequent use and significantly enhancing the creative process. The ability to view content in the immersive, spherical format — even virtually — helped our team make informed creative decisions early and often, especially with the generative AI content. By enabling quick reviews of work in context, VR previews streamlined the feedback loop and ensured the final visuals felt cohesive and polished.

Complementing the VR workflow was the mockup sphere, a physical dome prototype at our facilities with a resolution of 2160×2160 pixels. This setup allowed us to simulate the Sphere’s massive, curved LED canvas on a smaller scale. It became an invaluable tool for refining the final workflow, accommodating VR motion sensitivity, and validating progress at various stages of production. By gathering the team around this tangible representation of the Sphere, we could review, adjust, and confirm that the visuals would translate effectively to the full-resolution dome.
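As an illustration of the kind of remapping such previews involve (the production's actual UV templates and previsualization pipeline aren't detailed in the interview), the sketch below resamples an equirectangular frame into a 180-degree domemaster fisheye image, a common layout for dome previews, using NumPy:

```python
import numpy as np

def equirect_to_domemaster(equi: np.ndarray, out_size: int = 2160) -> np.ndarray:
    """Resample an equirectangular (lat-long) frame into a 180-degree
    domemaster (fisheye) image, with the zenith at the center of the frame."""
    h, w = equi.shape[:2]
    # Normalized output coordinates in [-1, 1], dome center at (0, 0).
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    u = (xs + 0.5) / out_size * 2.0 - 1.0
    v = (ys + 0.5) / out_size * 2.0 - 1.0
    r = np.sqrt(u * u + v * v)           # 0 at the zenith, 1 at the dome's rim
    azimuth = np.arctan2(v, u)           # [-pi, pi]
    altitude = (1.0 - r) * (np.pi / 2)   # pi/2 at the zenith, 0 at the rim

    # Map spherical coordinates back into the lat-long source image.
    src_x = ((azimuth / (2 * np.pi)) + 0.5) * (w - 1)
    src_y = (0.5 - altitude / np.pi) * (h - 1)

    dome = equi[src_y.astype(int), src_x.astype(int)]
    dome[r > 1.0] = 0                    # black outside the dome circle
    return dome

# Example: preview a lat-long render at a reduced, mockup-friendly resolution.
frame = (np.random.rand(540, 1080, 3) * 255).astype(np.uint8)  # stand-in frame
preview = equirect_to_domemaster(frame, out_size=512)
```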

Together, these two previsualization methods enabled us to work iteratively and collaboratively. By testing content within simulated environments, we ensured it would scale smoothly to the Sphere’s unique format while maintaining a fluid and dynamic creative process, ultimately delivering an immersive experience that aligned with the vision of show director Abigail Rosen Holmes, as well as the band’s longtime lighting designer Chris Kuroda, with whom we collaborated closely.

SIGGRAPH: You had a three-month timeline for this project. What were some of the hurdles you encountered, and how did you overcome them?

GB: In total, we had a setlist of 68 songs, spanning 12 hours of showtime, all delivered in 16K resolution. The challenges we faced in terms of scalability, optimization, and cross-team coordination were immense … but we made it happen.

Tight collaboration between our creative, technical, and production teams was key to our success. This synergy allowed us to troubleshoot effectively, align on creative goals, and deliver a seamless experience that matched the ambitious scale of the project.

The size and complexity of the Sphere’s canvas forced us to rethink our “standard” techniques and processes entirely. Ensuring smooth frame rates and avoiding performance bottlenecks — especially for real-time elements — were critical challenges. To address these, we prioritized early testing, adopted modular workflows for quicker iterations, and implemented a hybrid approach that combined pre-rendered and real-time content.

These solutions would not have been possible without the collective ingenuity, dedication, and hard work of everyone involved. We are very thankful for all the collaborators we’ve been able to work with, including Disguise, Fuse Technical Group, Fly Studio, Myreze, Sila Sveta, Troublemakers, Picnic Dinner Studios, and Totem Studio.

SIGGRAPH: How does the process of creating animations for the Sphere’s shape compare to working with a typical flat LED wall?

GB: Designing for the Sphere required a nuanced adaptation of our usual approach — not because the template, a dome, is an overly complex technical challenge, but because of the unique relationship between the spectator’s perception, the screen’s massive scale, and its immersive, wrapping nature.

That said, this wasn’t a significant departure from the kind of mega-canvases and unconventional surfaces we’re accustomed to working with at Moment Factory. The core creative process remained consistent, focusing on how to leverage the surface’s physical geometry to orchestrate illusions. By understanding how the space interacts with light, motion, and perspective, we were able to create visuals that were both immersive and emotionally resonant.

While the parameters shifted to match the Sphere’s format, our methodology — crafting show visuals that complement rather than dominate — remained fundamentally the same, just as it would with a flat LED wall. One of the biggest challenges wasn’t simply mapping visuals onto the dome but ensuring they worked in harmony with the music. The content had to act as a “fifth member” of the band, amplifying the energy of the performance while ensuring the musicians remained the focal point. This required artistry in how space was occupied and shaped. We created spatial experiences using tricks and visual illusions, carefully balancing moments of intensity with the importance of negative space. This allowed the visuals — and the audience — to breathe, giving rhythm and depth to the narrative without overwhelming the performance.

SIGGRAPH: Tell us more about the “VJing-style” control toolbox you developed. Was it the first of its kind? What went into its creation, and what’s next for this type of toolbox? 

GB: At its core, the system relied on a Disguise playback setup comprising multiple render nodes. These nodes operated as a synchronized cluster, genlocked to maintain precise timing and capable of handling the Sphere’s immense 16K resolution. This setup allowed us to ingest and stream pre-rendered media alongside real-time content, all of which could be layered and composited live. Up to three layers could be combined during the show, including one real-time stream, giving us significant creative flexibility.
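Conceptually (the Disguise cluster's internal compositing is not reproduced here, and the frames below are stand-ins), the per-frame layering can be thought of as a simple alpha-over stack of up to three frames, with one real-time stream on top; a minimal NumPy sketch:

```python
import numpy as np

def composite_layers(layers):
    """Simplified straight-alpha 'over' compositing of up to three RGBA float
    frames (values in 0..1), bottom layer first and assumed opaque. A stand-in
    for what a media server does for every output frame."""
    assert 1 <= len(layers) <= 3, "the show combined at most three layers"
    out = layers[0].copy()
    for layer in layers[1:]:
        a = layer[..., 3:4]
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out

# Example stack: a pre-rendered base loop, a pre-rendered overlay, and one
# real-time stream on top (random arrays standing in for decoded frames).
h, w = 270, 480
base     = np.dstack([np.random.rand(h, w, 3), np.ones((h, w, 1))])
overlay  = np.dstack([np.random.rand(h, w, 3), np.full((h, w, 1), 0.4)])
realtime = np.dstack([np.random.rand(h, w, 3), np.full((h, w, 1), 0.6)])
frame = composite_layers([base, overlay, realtime])
```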

Control was a crucial part of the system’s design. Using the DMX protocol, we integrated a toolbox directly into the GrandMA lighting console, with a standard approach for launching all kinds of video cues (as a VJ would do with a MIDI board). However, we extended the “cue” thinking to other realms by exposing parameters in our real-time scenes, giving the operator at the board a degree of control over the real-time visuals. For instance, we could live-adjust visual aspects of the Notch and UE scenes, such as saturation, animation speed, particle emitters, or compositing modes, ensuring the visuals were ever evolving and remained harmonized with the band’s performance.
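As a hedged sketch of what that parameter bridge might look like (the real console and media-server integration is not shown here, and the channel assignments below are invented for illustration), an incoming frame of 8-bit DMX channel values can be translated into normalized scene parameters like this:

```python
# Hypothetical mapping from 8-bit DMX channel values (0-255), as sent by a
# lighting console, to normalized parameters exposed by a real-time scene.
DMX_CHANNEL_MAP = {
    1: "saturation",
    2: "anim_speed",
    3: "particle_rate",
    4: "composite_mode",   # quantized rather than continuous
}

def apply_dmx_frame(dmx_values: dict, scene_params: dict) -> None:
    """Translate one incoming DMX frame into scene parameter updates."""
    for channel, raw in dmx_values.items():
        name = DMX_CHANNEL_MAP.get(channel)
        if name is None:
            continue
        value = raw / 255.0                  # normalize to 0..1
        if name == "composite_mode":
            value = float(int(value * 3))    # snap to discrete modes 0..3
        scene_params[name] = value

scene = {"saturation": 0.5, "anim_speed": 0.3, "particle_rate": 0.0, "composite_mode": 0.0}
apply_dmx_frame({1: 200, 2: 64}, scene)      # operator pushes two faders
```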

Beyond the toolbox itself, in the future we could further “complexify” the setup by introducing even more granular control over exposed parameters, enabling us to fine-tune aspects of the visuals with greater precision.

Alternatively, we could implement a system to capture live environmental data from the show, using it as inputs to allow the real-time visual content to respond dynamically, not only to the operator but also to the energy of the performance and the audience.
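One simple version of such an input, offered purely as an assumption about how it could work rather than a description of existing tooling, is an envelope follower that turns the level of live audio blocks into a smoothed control value a real-time scene could read:

```python
import numpy as np

class EnvelopeFollower:
    """Smooths the RMS level of incoming audio blocks so a jittery signal
    becomes a usable control value (e.g. for driving particle emission)."""
    def __init__(self, attack: float = 0.3, release: float = 0.05):
        self.attack, self.release = attack, release
        self.level = 0.0

    def process(self, block: np.ndarray) -> float:
        rms = float(np.sqrt(np.mean(block ** 2)))
        coeff = self.attack if rms > self.level else self.release
        self.level += coeff * (rms - self.level)   # rise fast, fall slowly
        return min(self.level, 1.0)

follower = EnvelopeFollower()
block = np.random.uniform(-1, 1, 1024)        # stand-in for a live audio block
scene_params = {"particle_rate": follower.process(block)}
```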

Another path could involve integrating alternative forms of control, reactivity, and interactivity through various input types — such as gesture-based interactions to provide a different way to manage visuals.

We could even design “video game”-like scenes controlled by a standard video game controller. This would allow the operator to essentially “play” a video game level and move through it in direct response to the music. Such an approach could bring an entirely new level of creative spontaneity to live performances.
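As one possible way to wire that up (an assumption, not existing tooling), a standard controller can be read with pygame and mapped onto scene controls; the axis and button indices below depend on the controller, and the hand-off to the real-time scene is left abstract:

```python
# Sketch: read a standard game controller with pygame to steer a
# "video game"-like scene. Axis/button indices are assumptions.
import pygame

pygame.init()
pygame.joystick.init()
if pygame.joystick.get_count() == 0:
    raise SystemExit("No controller connected")

pad = pygame.joystick.Joystick(0)
clock = pygame.time.Clock()
camera = {"x": 0.0, "y": 0.0, "boost": False}

while True:                                    # control loop; Ctrl+C to stop
    pygame.event.pump()                        # refresh controller state
    camera["x"] += pad.get_axis(0) * 0.1       # left stick X: move through the level
    camera["y"] += pad.get_axis(1) * 0.1       # left stick Y
    camera["boost"] = bool(pad.get_button(0))  # face button: trigger a visual accent
    # ...here `camera` would be sent to the real-time scene (e.g. over a network protocol)
    clock.tick(60)                             # ~60 updates per second
```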

Submissions for the SIGGRAPH 2025 Production Sessions are open. Visit our website to learn more.


Guillaume Borgomano is a creative lead and multimedia director at Moment Factory, specializing in immersive environments, large-scale projects, and 3D mapping. With a passion for blending creativity, technology, and art, he explores innovative techniques like real-time interactivity and generative AI to craft unique audiovisual experiences. Known for his thoughtful and collaborative approach, Guillaume thrives in multidisciplinary settings, creating captivating works that seamlessly merge real and virtual worlds.
