Out of this World: New Image Capturing Technique Replicates Space Exploration

21 November 2023 | Conferences

Image Credit: Copyright Adobe & Smithsonian

Explore the groundbreaking digitization of Alan Shepard’s Mercury spacesuit in this interview with Romain Rouffet, Jon Blundell, and Tamy Boubekeur — creators behind the SIGGRAPH 2023 Talk “Making a Digital Double of Alan Shepard’s Space Suit.” They share how they brought a historical artifact to life with unprecedented accuracy. This digital double not only preserves the past but sets a new standard for digitally conserving delicate or complex items. Read more to join the conversation on the future applications of this transformative technology in museums and beyond.

SIGGRAPH: Share some background about this project. What inspired you to create a digital double of this object, this particular space suit?

Romain Rouffet (RR): The project was motivated by the desire to digitally preserve a significant artifact in the history of space exploration: Alan Shepard’s Mercury spacesuit. This suit represents an important milestone in the United States’ first foray into human spaceflight. The goal was to create a digital replica of the suit as it appeared in 2022, serving both as a baseline for future conservation efforts and as a means to provide public access to this historic artifact.

Jon Blundell (JB): This project was a continuation of the National Air and Space Museum’s “Reboot the Suit” initiative. We partnered with the Adobe team because the complexity of the suit’s surface qualities posed challenges to a standard capture approach. We knew that our friends at Adobe had the tools and expertise to help create a highly detailed digital double of this incredible object.

SIGGRAPH: Let’s get technical. Tell us about the capture methodology you used. Why did you choose a hybrid approach to the normal/displacement capture? What did that entail?

RR: The hybrid capture approach, integrating photogrammetry and machine learning, was essential due to the limited availability of the spacesuit for conservation purposes. The method was designed to be time-efficient, circumventing the extensive workload that a purely photogrammetric approach would entail with its requirement for numerous close-up shots.
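While the Talk’s exact reconstruction pipeline isn’t reproduced here, the general idea of layering learned high-frequency detail on top of a photogrammetry base can be sketched with a standard “whiteout” tangent-space normal blend. This is a minimal illustration of one common blending technique, not necessarily the authors’ exact method, and the array names are assumptions:

```python
import numpy as np

def blend_normals(base: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """'Whiteout' blend of two tangent-space normal maps.

    base   -- low-frequency normals baked from the photogrammetry mesh
    detail -- high-frequency normals inferred by the ML model
    Both are (H, W, 3) float arrays with components in [-1, 1].
    """
    out = np.empty_like(base)
    out[..., :2] = base[..., :2] + detail[..., :2]  # sum the x/y tilts
    out[..., 2] = base[..., 2] * detail[..., 2]     # multiply the z terms
    # Renormalize to unit length per pixel.
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

# Usage note: maps stored as 8-bit textures decode via n = pixel / 127.5 - 1.0
# before blending, and re-encode via pixel = (n + 1.0) * 127.5 afterward.
```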

The choice of varied lighting conditions played a pivotal role in this process. Each lighting setup — ambient, diffused strobe, and cross-polarized — served a specific purpose in revealing distinct characteristics of the suit’s materials. Ambient light provided natural and even illumination, highlighting the suit’s general appearance and texture. The diffused strobe lighting accentuated the reflective and metallic qualities of the suit, offering a clear view of how the material interacts with more intense light. Finally, the cross-polarized lighting was crucial in minimizing specular reflections, allowing for a clearer observation of the suit’s surface details and textures.

These diverse lighting scenarios were instrumental in various aspects of the digitization process. While ambient lighting was critical for the machine learning phase, enabling the extraction of detailed features like the texture of the fabric and stitching patterns, the other lighting setups played a key role in material analysis. The diffused strobe and cross-polarized lighting conditions were particularly useful for creating Physically Based Rendering (PBR) maps. These maps, such as specular and glossiness, are necessary for realistically simulating how varied materials interact with light. This comprehensive approach to capturing the suit under different lighting conditions was important for creating a detailed and material-accurate digital double.
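To illustrate why the cross-polarized pass is so useful for PBR map creation, here is a minimal sketch of diffuse/specular separation from a pair of shots. The file names are hypothetical, and the sketch assumes linearized, pixel-aligned captures taken from the same camera pose; it shows the principle, not the authors’ production pipeline:

```python
import numpy as np
import imageio.v3 as iio

def load_linear(path: str) -> np.ndarray:
    """Read an 8-bit image and undo sRGB gamma (approximate)."""
    return (iio.imread(path).astype(np.float32) / 255.0) ** 2.2

# Hypothetical file names; both shots must share one camera pose.
unpolarized = load_linear("suit_diffused_strobe.png")  # diffuse + specular
cross_pol = load_linear("suit_cross_polarized.png")    # ~diffuse only

# Cross-polarization rejects most first-bounce specular reflection, so
# the difference image isolates (roughly) the specular component that a
# PBR specular/glossiness map needs to describe.
specular = np.clip(unpolarized - cross_pol, 0.0, 1.0)
albedo = cross_pol
```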

Tamy Boubekeur (TB): Compared to typical cultural heritage assets, the suit exhibited several challenges that were motivating for us: conductive materials, geometric structure at multiple scales, manufactured hard surfaces alongside softer components, a high degree of reflectivity, etc. We took these challenges as an opportunity to demonstrate a specific capture and reconstruction workflow that uses machine learning to enhance and improve the solid basis we have with photogrammetry, rather than replacing it completely. Interestingly, the final model is a fully fledged textured mesh with PBR materials, ready to use natively in any 3D package.
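As a quick illustration of that “ready to use” claim, a glTF/GLB export of such a textured PBR mesh can be inspected with off-the-shelf tooling. The file name below is hypothetical:

```python
import trimesh  # pip install trimesh

# Hypothetical file name; any glTF 2.0 export with PBR materials works.
scene = trimesh.load("shepard_suit.glb")

# Each named sub-mesh carries its own PBR material (base color,
# metallic/roughness, normal textures, ...).
for name, mesh in scene.geometry.items():
    print(name, len(mesh.vertices), type(mesh.visual.material).__name__)
```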

JB: I think Romain and Tamy have done a great job covering why we used the three lighting conditions and machine learning approaches, so I’ll just touch on the physical setup. For the diffuse data sets, we essentially created a large light box around the suit using a combination of scrims to diffuse the incoming light and bounces to keep reflecting the light that made it in around the light box. For the cross-polarized capture, we had the opposite goal: we only wanted light that directly reflected off the suit from our polarized light source to be picked up by the camera. In this scenario, the suit was placed in a cavernous warehouse room with a high ceiling and enough floor space to position the suit far from the walls. We then surrounded the suit with a moat of black velvet to trap any light that would have bounced off the floor. There was, of course, no way to mitigate secondary bounces off the suit itself. The resulting image set was very interesting, as it really lets you see the self-reflections in the suit; it’s a very intuitive way to understand how light travels.

SIGGRAPH: How did this approach differ from previous capture methods? How did it enhance the creation of the final digital suit?

RR & TB: Our approach was different from standard methods, which usually separate photogrammetry from photometry, especially when a light stage is not used. We replaced the traditional photometry role with machine learning and various lighting setups. This made the process more efficient, eliminating the need for a lot of photometric data. Our method improved the final digital suit. The model we created was not only geometrically accurate but also truly reflected the suit’s materials and appearance.

SIGGRAPH: You are bringing history to life with this work. How will this innovation help us better understand the past? Or maybe even the future?

RR: This project brings a historical artifact to life in digital form, offering a new way to understand the past. We can now closely examine and study the suit without risking damage to the actual object. Looking ahead, this method could be a meaningful change in digitally preserving other historical artifacts, making them available for education, research, and preservation, regardless of location.

TB: The final delivery format is also very versatile; the digital twin can easily be integrated into a number of experiences, from VR/AR to high-end film shots to Web3D.

JB: The Smithsonian, at its core, is an educational institution. We can leverage this compelling 3D model to tell the story of the suit and, by extension, the context of the early NASA program and the engineering innovations driven by the race to space. The model is also a highly detailed snapshot of the suit at a moment in time, which makes it an amazing asset to provide to those doing research and conservation related to the Smithsonian’s space suit collection.

SIGGRAPH: This time it was a space suit. What is the next step for leveraging this new technology?

RR: It is important to note that this is more about a new workflow than just technology. This approach is ideal for items that are fragile, unique, or have complex details that are difficult to capture traditionally. Museums and historical groups will find this particularly useful for preserving old or delicate items. We plan to apply this workflow to a broader range of artifacts. This way, more people can learn about these items digitally, even if they cannot see them in person. It is also a terrific way to maintain a digital archive for the future. This workflow could significantly change how we document and learn about historical items.

JB: As there is no “one size fits all” solution to 3D digitization, our team is always looking to develop new digitization approaches to document the Smithsonian’s collection. The workflow developed with Adobe expands the scope of objects at the Smithsonian that can be candidates for upcoming digitization projects.

Stay in the know on all things Talks! With submissions opening soon and registration underway, join our mailing list today to receive the SIGGRAPHITTI newsletter and other informational conference emails.


Jon Blundell is a 3D Program Officer at the Smithsonian’s Digitization Program Office. A Maryland native, Jon is currently living out the assumption he made at the age of 6 that he would either work at the Smithsonian or become an astronaut. He chose the shorter commute. Coming from the world of the preservation trades, Jon found himself at the DPO in 2012, where he focuses on the technical challenges of the department: developing workflows and IT infrastructure to support 3D capture, processing, data management, and delivery to the public. When he’s not uploading the Smithsonian’s collection to the Matrix, he can be found playing pinball and tabletop games.

Tamy Boubekeur is a Senior Principal Research Scientist and Senior Director at Adobe Research, leading the Paris Lab. He is also a Professor at Ecole Polytechnique, Institut Polytechnique de Paris, and is currently on leave from his (main) professorship in Computer Science at Telecom Paris, Institut Polytechnique de Paris, where he founded the computer graphics group in 2008. He was previously Director of 3DI Research & Labs at Adobe, Chief Scientist at Allegorithmic (acquired by Adobe in 2019), research associate at TU Berlin (Germany), and research team member at INRIA (France) and the University of British Columbia (Canada). He received an M.Sc. in Computer Science (2004) and a Ph.D. in Computer Science (2007) from the University of Bordeaux, as well as an HDR (“Habilitation à Diriger des Recherches”) in Computer Science from University Paris XI (2012). His research focuses on 3D computer graphics, with a special interest in modeling, rendering, and learning efficiently from 3D data, which includes shape analysis (from spatial to statistical methods, with application to recognition, interactive modeling, and rendering), geometry capture, processing, and editing, real-time image synthesis, global illumination, GPU programming, and graphics data structures. Over the last few years, his team has transferred a number of its technologies to the Adobe Substance 3D products, including major rendering, modeling, and AI features.

Romain Rouffet is a Creative Technologist working on material and object capture. In that role, his goal is to be a bridge between Adobe Research and product teams by proposing creative, efficient, and achievable solutions to be implemented in products. As an artist, he showcases R&D in creative ways. Romain received his engineer’s degree in Mechanical Engineering and previously worked on the 3D digitization of endangered cultural heritage sites. Now, as part of Adobe Research, he focuses his work on dataset creation and the capture and editing of digital materials. When he’s not thinking about 3D scanning, Romain keeps himself busy with landscape photography and mountain trekking.
