Large-Scale Photogrammetry for Cultural Heritage at St. John’s Co-Cathedral

13 May 2026 | Conferences

Image credit: Stargate Studios Malta; used with permission.

Ahead of their SIGGRAPH 2026 Art Papers presentation, Dylan Seychell, Matthew Kenely, Mark Bugeja, Andre Grima, Peter Pullicino, and Matthew Pullicino offer a deep-dive look at their effort to digitally document the tapestries in St. John’s Co-Cathedral — one of Malta’s most spectacular cultural landmarks. Blending large-scale photogrammetry, AI-assisted processing, and production-tested workflows, the team captures both the technical complexity and the visual splendor of this extraordinary site. Read on for a preview of the ideas and research they’ll share at SIGGRAPH 2026 and how computer graphics can help preserve, share, and reimagine cultural heritage for global audiences.

SIGGRAPH: What inspired your team to take on the large-scale digital documentation of St. John’s Co-Cathedral? How did the cultural and historical significance of the site shape your technical and artistic priorities?

Contributors: Our primary inspiration was a rare and historic window of opportunity. Following a 16-year restoration in Belgium, the full set of 29 Flemish tapestries was returned to the cathedral for a final, breathtaking exhibition titled “A Gift of Glory.” This was likely the last time these 17th-century masterpieces, based on designs by Peter Paul Rubens, would ever be hung together in the nave before moving to their permanent climate-controlled resting place in the new cathedral museum. We felt a profound sense of responsibility to capture this specific moment in time, when the cathedral appeared exactly as the grandmasters intended centuries ago.

The cultural significance of this event dictated our technical priorities. Because the tapestries were only on display for a limited period, our data collection had to be very efficient. We dedicated two-and-a-half of our seven nights exclusively to the tapestries and the chapels, using high-resolution 8K photography to capture the intricate weaving and silver threads that are now part of our digital archive. Artistically, our priority was to ensure that future generations could virtually “visit” this specific exhibition, preserving the relationship between the monumental tapestries and the gilded Baroque architecture they were designed to complement.

SIGGRAPH: Baroque interiors are challenging to document due to reflective surfaces, dark materials, and dense ornamentation. What were some unexpected obstacles you encountered during data capture, and how did you adapt your workflow to overcome them?

Contributors: The most persistent obstacle was the “specular confusion” caused by the cathedral’s extensive gold leaf and bronze. These reflective surfaces create highlights that shift as the camera moves, which can mislead photogrammetric algorithms into seeing false geometric changes. To overcome this, we reduced image contrast during post-processing to minimize the baked-in highlights and shadows.
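The contrast-reduction step can be illustrated with a minimal sketch. This is a hypothetical illustration, not the team's actual tooling: it simply blends pixel values toward mid-gray so baked-in specular highlights and deep shadows vary less between overlapping photographs before alignment.

```python
import numpy as np

def compress_contrast(image: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Blend pixel values toward mid-gray to soften baked-in
    specular highlights and deep shadows before photogrammetric
    alignment. `image` is float32 in [0, 1]; strength=0 is a no-op."""
    mid = 0.5
    return image * (1.0 - strength) + mid * strength

# Usage: a bright specular pixel (0.95) is pulled toward mid-gray,
# so the same gilded spot looks more consistent across viewpoints.
pixel = np.array([0.95, 0.95, 0.95], dtype=np.float32)
softened = compress_contrast(pixel, strength=0.4)  # → [0.77, 0.77, 0.77]
```

The `strength` parameter here is an assumption for illustration; in practice such tone adjustments are tuned per dataset and applied in a raw-processing tool rather than a script.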

Unexpectedly, we also had to manage physical movement within the site. The hanging tapestries are subject to constant micro-movements, which initially compromised reconstruction quality. We adapted by developing a plan to reproject high-resolution flat photographs of the tapestries, captured while they were dismantled, onto smoothed digital meshes. Furthermore, the extremely low light required us to prioritize sharp focus over low ISO settings. We chose to accept digital grain as a trade-off for alignment accuracy, later mitigating that noise through a month-long, AI-assisted denoising process.

SIGGRAPH: Your pipeline combines automated processes with manual intervention. Describe a moment where human judgment became essential to work in tandem with the AI-assisted parts of the process.

Contributors: Human judgment was critical during the LIDAR cleanup and registration phase. While the LIDAR software can automatically combine scans, it cannot distinguish between permanent historical architecture and temporary modern obstructions like scaffolding, tourist signage, or chairs. Our team had to manually identify and remove these objects from each of the 43 sixty-minute scans to prevent them from becoming permanent artifacts in the final 3D model.

Manual intervention was again required when automated alignment broke down due to the geometric complexity of the site. Because AI denoising can occasionally erode fine details critical for feature detection, we placed over 100 control points by hand to bridge the disconnected regions. These control points acted as anchors that enabled the software to successfully align 98.6% of the 99,000 images into a single unified component.

SIGGRAPH: You present early experiments with Gaussian splatting alongside traditional mesh-based reconstruction. What excites you about this emerging representation for cultural heritage, and where do you see it fitting into future documentation workflows?

Contributors: What excites us most about Gaussian splatting is its ability to preserve view-dependent effects (the natural way light glints off a gilded surface or reflects on marble) which traditional photogrammetry often struggles to replicate. In our experiments with the Chapel of Germany, Gaussian splatting provided a physically accurate appearance that felt much more “alive” than a standard textured mesh.
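The view-dependent shading that makes splats feel "alive" typically comes from per-Gaussian spherical-harmonic (SH) color coefficients, evaluated against the viewing direction at render time. A minimal sketch follows, using the standard real SH basis for degrees 0 and 1; the coefficient values are invented for illustration and are not the authors' data.

```python
import numpy as np

# Real spherical-harmonic basis constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # magnitude of the Y_1^m terms

def sh_color(coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate degree-1 SH color for one Gaussian.
    `coeffs` has shape (4, 3): a DC term plus three degree-1 terms,
    each an RGB triple. `view_dir` is a unit vector."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # weighted sum over basis functions -> RGB

# Illustrative coefficients: a warm base color whose brightness
# increases when viewed from +x, loosely mimicking gilding
# catching the light as the camera moves.
coeffs = np.array([
    [2.5, 2.0, 0.8],     # DC (base color)
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [-0.6, -0.5, -0.2],  # component that varies with view x
])
front = sh_color(coeffs, np.array([1.0, 0.0, 0.0]))
side  = sh_color(coeffs, np.array([0.0, 0.0, 1.0]))
# `front` is brighter than `side`: the same splat shifts appearance
# with the camera, unlike a fixed baked texture on a mesh.
```

Production Gaussian-splatting renderers use higher SH degrees and millions of Gaussians, but the principle is the same: color is a function of view direction, which is what lets gilded surfaces glint.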

We see this as a dual-path future for documentation workflows. Traditional mesh-based reconstructions will remain the standard for structural analysis and conservation because they capture the raw geometric truth of the building. However, Gaussian splatting will likely become the primary tool for high-fidelity visualization, virtual tourism, and interactive experiences where the goal is to evoke the emotional, atmospheric experience of being inside the space.

SIGGRAPH: You will be presenting this work as part of the SIGGRAPH 2026 Art Papers program. What do you hope the SIGGRAPH community takes away from your research and presentation?

Contributors: We hope the SIGGRAPH community recognizes that large-scale heritage documentation is as much about logistical strategy and human intervention as it is about advanced algorithms. In this paper, we aim to connect the clean lines of photogrammetric theory with the messy, day-to-day realities of working at a busy, world-famous living heritage site.

Ultimately, we want to show that this “Digital DNA” (a model made up of 25 to 30 billion triangles) is far more than just a digital archive. It’s a rich, flexible foundation that lets people touch heritage through 3D prints, explore it online from anywhere in the world, and experience it immersively in VR. We hope to inspire other researchers to develop tools that further automate the chunking and optimization of these massive datasets, making this level of preservation accessible to more sites worldwide.
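The chunking the contributors hope to see automated can be sketched in miniature. The snippet below is a toy illustration under our own assumptions, not the team's pipeline: it bins triangles into an axis-aligned grid by centroid so each spatial chunk of a massive mesh can be streamed, optimized, or rendered independently.

```python
import numpy as np
from collections import defaultdict

def chunk_triangles(vertices: np.ndarray, faces: np.ndarray, cell: float):
    """Bin triangles into axis-aligned grid cells by centroid.
    `vertices` is (V, 3) float coordinates; `faces` is (F, 3)
    vertex indices. Returns {cell_key: [face indices]}."""
    centroids = vertices[faces].mean(axis=1)       # (F, 3) per-face centroid
    keys = np.floor(centroids / cell).astype(int)  # grid cell per face
    chunks = defaultdict(list)
    for face_idx, key in enumerate(map(tuple, keys)):
        chunks[key].append(face_idx)
    return chunks

# Usage: two triangles far apart land in separate chunks.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [10, 0, 0], [11, 0, 0], [10, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [3, 4, 5]])
chunks = chunk_triangles(verts, faces, cell=5.0)
# len(chunks) == 2
```

At the scale of tens of billions of triangles, real tooling would add out-of-core streaming, per-chunk level-of-detail, and boundary stitching, which is exactly the automation gap the contributors highlight.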

Don’t miss the fascinating presentation of “Large-Scale Photogrammetric Documentation of St. John’s Co-Cathedral: A Workflow for Cultural Heritage Preservation” during the SIGGRAPH 2026 Art Papers program on Wednesday, 22 July. Register now to save your spot in Los Angeles and access more emerging content on art, computer vision, image processing, modeling, and more.  


Dylan Seychell is a Lecturer in the Department of Artificial Intelligence at the Faculty of ICT, University of Malta, where he holds a PhD in Computer Vision. He has published extensively on AI and computer vision across international peer-reviewed conferences, journals, and books, with work recognised through awards including the European Satellite Navigation Competition and CeBit. He leads the Dawl AI Lab and serves as Principal Investigator across a portfolio of funded research projects applying AI to environmental sensing, media literacy, and spatial intelligence. He also leads Malta’s National AI Literacy Programme in partnership with the Malta Digital Innovation Authority, for which he serves as a certified technical expert. His research spans computer vision, AI-assisted 3D reconstruction, and cultural heritage digitisation. He is committed to bridging academic research and real-world deployment, and this work on St. John’s Co-Cathedral reflects his broader interest in deploying AI and computer vision at heritage scale.

Matthew Kenely graduated with a B.Sc. IT (Hons) in Artificial Intelligence from the University of Malta in 2023, followed by an M.Sc. in Artificial Intelligence in 2026, both with Distinction. His research interests lie in computer vision and image processing, with his postgraduate research focusing on neuroimaging, applying Siamese networks to identify monozygotic twins from brain MRI scans through neuroanatomical differences extracted through embeddings. In 2024, Matthew joined SeyTravel Ltd as an AI Engineer, contributing to the development of the company’s SaaS platforms and research on the application of AI. Recently, his research has expanded into cultural heritage, with peer-reviewed published work spanning conversational AI for cultural and sacred contexts, visual saliency prediction, and automatic annotation for computer vision training.

Mark Bugeja received the B.Sc. degree in creative computing from the University of London, London, U.K., in 2012, and the M.Sc. and Ph.D. degrees in artificial intelligence from the University of Malta, Msida, Malta, in 2017 and 2025, respectively. His doctoral research focused on computer vision with applications in intelligent transport systems. He is a Lecturer in Digital Tourism with the Department of Tourism Management, University of Malta. From 2020 to 2025, he was Head of Research and Development at the Institute of Tourism Studies, Malta, where he was responsible for the system design of Malta’s Skills Pass Platform. From 2017 to 2020, he was a Research Support Officer with the Department of Artificial Intelligence and the Institute of Climate Change and Sustainable Development, University of Malta. He is an AI and computer vision specialist with applied research in tourism and transport. His research interests include computer vision, reinforcement learning, intelligent transport systems, and the application of machine learning to digital tourism and tourist behaviour.

Andre Grima graduated with an MA in Animation Production from AUB in 2015. During his studies, he developed a strong interest in VFX, and following graduation he returned to Malta and joined Stargate Studios as a CG Artist. He contributed to various television dramas, including Medici: Masters of Florence. Over the years, Andre took on increasing responsibilities, progressing to CG Lead and then CG Supervisor. In these roles, he helped grow the CG team while contributing to high-profile projects including Leonardo, Threadstone and Devils. In recent years, Andre has shifted his focus towards research and development within the studio, specialising in areas like digital scanning, photogrammetry, and Unreal Engine. He has been involved in developing new technologies and pipelines while simultaneously working on high-end productions including 1923, Gladiator II and Jurassic World.

Peter Pullicino is a camera and production specialist at Stargate Studios Malta. His background is grounded in cinematography, spanning camera assisting, focus pulling, and directing photography, and that technical foundation informs everything he does. Peter’s work centres on building the bridge between on-set production and post-production: designing pipelines that serve both disciplines, bringing the creative intelligence of post onto set, and the rigour of production into the effects pipeline. He develops AI-augmented workflows and builds custom tools where the work demands it.

Matthew Pullicino is Managing Director of Stargate Studios Malta. He has worked in the industry for 25 years on over 80 productions, including Black Mirror, One Piece, Medici, and Paul, Apostle of Christ. He has served as VFX supervisor on productions as well as Executive Producer for major titles. His experience across narrative and documentary work informs his approach to projects with substantive subject matter. Matthew is also involved in storytelling through parallel disciplines such as interactive and immersive platforms. Matthew has also invested in adopting AI workflows to evolve the digital studio and its capabilities. 
