Tapping Into Innovation

1 August 2025 | Conferences, Mobile

Image credit: Nhan Tran

SIGGRAPH 2025’s Appy Hour highlights some of the most inventive, forward-thinking apps in development today. We spoke with three of this year’s contributors whose work reimagines how we interact with technology — through personal tracking, playful exploration, and creative collaboration. Each app offers a distinct take on how digital tools can be used not just to engage, but to empower.

SIGGRAPH: What inspired the creation of your app, and how do you see it impacting the community or industry it was designed for?

Nhan Tran (NT), creator of “MeCapture: Capturing and Visualizing Long-Term Body Changes with Mobile AR”: Our app is designed to help users capture what we call a personal time lapse — a high-quality time lapse of a human body part, such as a hand, face, or foot — that can be used to visualize healing and growth over long periods of time.

A lot of the inspiration came from the COVID-19 lockdowns. During the pandemic, remote healthcare became crucial, and many patients resorted to sending pictures taken with their phones to doctors in lieu of in-person visits. We wanted to improve that process by helping users eliminate the ambiguities that can make a photo hard to interpret — such as variations in camera viewpoint, lighting, or body pose — all of which can significantly impact how something like a wound appears in a photograph.

“MeCapture” helps users control these variables outside clinical settings, allowing them to capture informative data from home, work, or anywhere else. It lets users capture and visualize slow, long-term changes in their bodies through personal time lapse, as introduced in our UIST 2024 research article.

Beyond medicine, we see potential for this technology in other domains, such as tracking plant growth in field research or monitoring structural changes in buildings. The app and sample 3D personal time lapses are available for free at MeCapture.com.

Time lapse has a unique power: it reveals patterns and progress that are invisible in the moment but deeply meaningful over time. This work builds on a broader line of research in our group exploring how everyday devices can document gradual change. “MeCapture” focuses on the human body, but we’ve also developed tools for capturing other types of long-term change, including another paper at SIGGRAPH 2025 titled “Pocket Time Lapse,” which uses a mobile phone to capture time lapse of outdoor scenes like construction sites and changing foliage.

Image credit: “ScavengeAR” © 2025 Victor Leung

Victor Leung (VL), creator of “ScavengeAR”: “ScavengeAR” originated from a conversation with the SIGGRAPH 2017 VR Village Chair at SIGGRAPH 2016 in Anaheim, where she expressed interest in a mobile conference app for the following year. Inspired by the recent success of “Pokémon Go” and fueled by a love for photography games like “Pokémon Snap!”, a team of SIGGRAPH volunteers and students from the Luddy School of Informatics, Computing, and Engineering at Indiana University Indianapolis developed an augmented reality scavenger hunt for the conference.

Running from 2017 to 2019, “ScavengeAR” attracted thousands of players, successfully educating them on augmented reality and enhancing their conference experience. While development paused in 2020 due to the pandemic, we aim to reignite the app’s potential and recapture that positive impact.

Image credit: Yosun Chang

Yosun Chang (YC), creator of “AI3D Co-Create with AI3D Render”: In a time when humans are ceding creative control to AI for “good enough” or “close enough,” what if we could create a positive feedback loop between multimodal AI and humans to easily and expressively create what we actually want? Since antiquity, mastery has often required the Gladwell 10,000-hour investment in skill. But what if we used modern technology to create what the ancients could only dream of — the perfect rendition of the actual idea?

SIGGRAPH: What was one of the biggest technical or creative challenges you faced in developing the app, and how did you overcome it?

NT: To help users consistently capture medically useful photos over time, we identified three key variables to control: camera viewpoint, body pose, and lighting conditions.

For viewpoint, we found that existing on-device trackers often fail when applied to human subjects — which bend and move in complex ways — or when the background changes between captures. We developed a custom 3D tracker that estimates the relative pose between the current and previously captured body geometry. The reference observation — which captures both the 3D shape (depth) and appearance (RGB) — can be created by the user or by a clinician during an initial visit. It defines the viewpoint and body pose to match in future recaptures. During capture, the app uses this real-time pose estimation to drive AR guidance, such as rings and crosshairs on-screen, to help users align with the reference viewpoint.
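To make the alignment step concrete, here is a minimal Python/NumPy sketch of how a relative pose error could drive that kind of on-screen guidance. This is not the app’s actual tracker; the function names, coordinate conventions, and tolerances are illustrative assumptions.

```python
import numpy as np

def pose_error(T_ref: np.ndarray, T_cur: np.ndarray):
    """Relative pose between a reference camera pose and the current one.

    Both inputs are 4x4 homogeneous camera-to-world transforms. Returns the
    translation offset (meters) and the rotation angle (degrees).
    """
    T_rel = np.linalg.inv(T_ref) @ T_cur
    t_err = T_rel[:3, 3]  # how far the camera has drifted from the reference
    cos_a = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0  # angle from rotation trace
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return t_err, angle

def guidance(t_err: np.ndarray, angle: float, t_tol=0.02, a_tol=3.0) -> str:
    """Map pose error to alignment cues that could drive AR overlays.

    Axis directions depend on the camera convention; here x is right,
    y is down, z is forward, and the thresholds are arbitrary.
    """
    if np.linalg.norm(t_err) < t_tol and angle < a_tol:
        return "aligned"
    hints = []
    if abs(t_err[0]) > t_tol:
        hints.append("move left" if t_err[0] > 0 else "move right")
    if abs(t_err[1]) > t_tol:
        hints.append("move up" if t_err[1] > 0 else "move down")
    if abs(t_err[2]) > t_tol:
        hints.append("move back" if t_err[2] > 0 else "move closer")
    if angle > a_tol:
        hints.append(f"rotate about {angle:.0f} degrees toward the reference")
    return ", ".join(hints)
```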

For body pose, the app overlays the reference image with a pixel-level color blend: red if the body part is too close, blue if it’s too far, and green when aligned correctly. This real-time feedback helps users match the previous pose.
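That per-pixel comparison is straightforward to sketch. The following example computes such an overlay from a current and a reference depth map; the tolerance value and the exact colors are assumptions, with only the red/blue/green scheme taken from the description above.

```python
import numpy as np

def depth_feedback(depth_cur: np.ndarray, depth_ref: np.ndarray,
                   tol: float = 0.01) -> np.ndarray:
    """Per-pixel RGB feedback comparing current depth to a reference.

    Inputs are HxW depth maps in meters. Pixels more than `tol` meters
    closer than the reference are tinted red, farther ones blue, and
    well-aligned ones green, mirroring the on-screen color blend.
    """
    diff = depth_cur - depth_ref
    overlay = np.zeros((*diff.shape, 3), dtype=np.uint8)
    overlay[diff < -tol] = (255, 0, 0)          # too close -> red
    overlay[diff > tol] = (0, 0, 255)           # too far -> blue
    overlay[np.abs(diff) <= tol] = (0, 255, 0)  # aligned -> green
    return overlay
```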

Lighting was especially tricky due to variable ambient conditions. Since people take photos in different environments — apartments, buses, offices — we couldn’t control lighting. The only light source we could reliably use was the phone’s flash. The app captures two photos in quick succession — one with flash and one without — then subtracts the ambient-only (no-flash) image from the flash image, leaving only the light contributed by the flash. Using only the flash keeps lighting consistent throughout the time lapse.
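In principle, that flash/no-flash separation comes down to a single subtraction. Here is a minimal sketch of the step, assuming the two frames are already aligned and stored as linear RGB (the subtraction is only physically meaningful before gamma encoding); it illustrates the general technique rather than “MeCapture”’s implementation.

```python
import numpy as np

def flash_only(flash_img: np.ndarray, ambient_img: np.ndarray) -> np.ndarray:
    """Isolate the flash contribution from a flash/no-flash pair.

    Both inputs are HxWx3 arrays in linear RGB, captured in quick
    succession from the same viewpoint. The no-flash frame measures
    ambient light alone; subtracting it leaves the scene as lit purely
    by the phone's flash, independent of the room's lighting.
    """
    diff = flash_img.astype(np.float32) - ambient_img.astype(np.float32)
    return np.clip(diff, 0.0, None)  # clamp sensor noise below zero
```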

These solutions combine techniques from computer graphics, computer vision, and human–computer interaction to make high-quality, repeatable data capture possible for everyday users.

VL: The biggest challenge was transforming a patchwork of third-party services and deprecated systems into a low-cost, maintainable solution. Since the app’s original release, AR technology and design conventions had evolved significantly, and many of the tools we once relied on were no longer supported. This required us to rethink the architecture from the ground up — replacing legacy components and rebuilding both the AR interface and backend infrastructure.

Fortunately, the AR ecosystem has become more standardized, and the rise of open-source tools made it easier to adopt sustainable, future-proof solutions. While open source often lacks the dedicated support of enterprise tools, the community has been incredibly responsive and resourceful. When implemented thoughtfully, these solutions proved just as capable.

YC: Initially, I overcomplicated the app as “AI3D Sculpt,” sneakily demoing it with live audience participation at the AI for 3D Birds of a Feather at last year’s SIGGRAPH, at ECCV Demos 2024, and at the CODAME ART+TECH Festival Milan. After a suggestion from @Corgi.CAM and Michael Gold, I simplified it into “AI3D Primitives.” Galvanized by the Google Gemini Million Dollar Hackathon, I wondered: What if you could keep creating and never quite hit the two-million-token context cap, co-creating with an AI assistant that just “knows you”?

Since 2025 is considered the year of generative AI video, we added “AI3D Render Mode”, powered by Netflix/Eyeline’s “Go with the Flow.” It’s the first all-inclusive platform allowing 3D modeling and keyframe-based control of AI video.

SIGGRAPH: SIGGRAPH 2025 is fascinated by the human story at the center of the technology and advancements created by our community. How does your app contribute to that story?

NT: The human body is always changing, but that change is often so gradual we barely notice. “MeCapture” helps reveal these subtle, long-term transformations. Whether tracking post-surgical recovery or observing a chronic condition, these visualizations are deeply tied to a person’s health journey.

Since launching, the app has been downloaded in 14 countries, and we’ve received encouraging feedback. While “MeCapture” is not yet clinically tested, our research shows it leads to more consistent and precise data — a step toward improving remote healthcare. We’ve received interest from medical researchers and are exploring collaborations, including features that help doctors guide patients in capturing better data. We hope to keep learning from users and clinicians to make this technology more useful in real-world care.

VL: “ScavengeAR” uses augmented reality not just as a visual novelty, but as a catalyst for real human connection. I watched strangers meet at physical markers — drawn together by the goal of discovering digital creatures. What started as individual curiosity evolved into collaborative exploration. People formed impromptu groups, helped each other navigate the experience, laughed together, and sparked conversations that continued well beyond the app.

YC: As a very non-standard human with an unlikely background — I’ve been published at SIGGRAPH for a decade despite being a triple-major dropout with zero degrees — I’ve built revolutionary apps that won the TechCrunch Disrupt Grand Prize twice, all while working solo. Many assume I have a large team or budget, but it’s just me, building breakthrough HCI x AI apps in the wild with my corgi as co-pilot.

When I see inefficient software, I dream up 10,000 improvements and start building. I’ve mastered broken tools — earning Autodesk 3ds Max certification — and even considered becoming a physicist (I aced the GRE Physics exam). I dive deep into the common language of our time to create more intuitive human–AI interfaces. The “AI3D Foundation” is just me + corgi + the world as our lab. We’re 100% funded by winning hackathons and maintain full autonomy to openly present our research.

These apps go beyond technology. They open up new ways to see, create, and understand the world around us. At SIGGRAPH 2025’s Appy Hour, these creators use innovation and storytelling to invite you to explore, engage, and imagine what comes next. Register now for SIGGRAPH 2025 and be part of the experience.


Nhan (Nathan) Tran is a PhD candidate at Cornell University, working with Professor Abe Davis. Alongside his computer science studies, he is pursuing a minor in Film and Video Production. His current research interests include AR/VR technology, human-computer interaction, and interactive interfaces for content creation.

Victor Leung is a software engineer prototyper, lighting/rendering technical artist, and pipeline engineer with 10+ years of experience building 2D/3D pipelines, prototyping minimum viable products, and evangelizing for Media & Entertainment as well as Research & Development. He has made technical and design contributions on emerging software/hardware platforms utilizing data capture, quality control, 3D reconstruction, AR/VR/MR, synthetic data, and AI.

Yosun Chang is an AI and AR/XR industry veteran and visionary hacker-entrepreneur who has conceived and built MVP software that defined AR e-commerce and real-time mobile character animation, winning two TechCrunch Disrupt Grand Prizes and hundreds of other awards; her work has also been exhibited at Ars Electronica, SXSW, Maker Faire, the Tech Museum of Innovation, and elsewhere. Her app DrawmaticAR, which turns what kids write on paper into 3D AR animated scenes, became the first solo contribution to win SIGGRAPH Real-Time Live!. A current focus is creating “AI3D” software that utilizes novel HCI to help humans create 3D intuitively and expressively.
