Transforming Shadows, Transforming Self

by | 16 October 2019 | Art, Conferences, Design, Graphics, Interactive Techniques

Fragment Shadow © 2019 Sony Computer Science Laboratories, Inc.,
The University of Tokyo, Backspace Productions Inc., and Koozyt, Inc.

The SIGGRAPH 2019 Studio installation Fragment Shadow offered participants an opportunity to see their real optical shadows fragmented, textured, and immersed in projected graphics. Catch a video preview of the installation below, then read on for our exclusive interview with creator Shunichi Kasahara of Sony CSL and The University of Tokyo.

SIGGRAPH: The Fragment Shadow system consists of multiple projectors that project onto the same surface. What was the inspiration for the development of this system?

Shunichi Kasahara (SK): My main research interest is to investigate how we humans understand ourselves through the perception of our bodies and actions, and especially how technology can enhance and transform the perception of self. In this context, a “shadow” is really interesting because it is the simplest thing that reflects the body. The initial idea was that I wanted to transform the shadow itself, not a pseudo-shadow captured by cameras or a depth sensor. While I was working on another idea with projectors, I found that multiple projectors could generate unique transformations of shadows.

Next, our team started to create the system around this idea. From a technical point of view, we needed to solve four challenges: geometry calibration, color space calibration, color uniformity calibration, and frame synchronization with multiple projectors. To make a long story short, we tackled those challenges together and eventually achieved the exhibition-ready system participants saw at SIGGRAPH 2019.

SIGGRAPH: What else did you find exciting about the idea of using shadows as the basis for the project?

SK: The most exciting aspect of this idea was that a real shadow has infinite resolution (because it’s real) and no latency (because it’s real), yet our system can still control its transformation. After building a basic calibration system for multiple projectors, we explored what kinds of transformation or fragmentation we could create. For instance, we can fragment the shadow’s shape by controlling which projector contributes to each position on the screen. Another technique is creating a textured shadow by calculating the physical color on the projection screen across multiple projectors; this is like hiding an image by generating an “inverse” texture. We also found a really interesting visual expression for the shadow’s graphic look by assigning multiple projectors to separate layers.
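To make the per-position assignment concrete, here is a minimal sketch assuming an idealized additive projection model and two pre-calibrated projectors. The function names and the tile-based partition are illustrative only, not the actual Fragment Shadow implementation:

```python
import numpy as np

def fragment_masks(height, width, n_projectors, tile=64):
    """Assign each screen tile to exactly one projector (a simple grid
    partition; the real installation works in calibrated screen coordinates)."""
    ys, xs = np.mgrid[0:height, 0:width]
    owner = ((ys // tile) + (xs // tile)) % n_projectors
    return [(owner == p).astype(np.float32)[..., None] for p in range(n_projectors)]

def projector_frames(target_rgb, masks):
    """Each projector renders only its own fragments. The fragments sum to the
    full target image on the screen, so occluding one projector darkens only
    that projector's tiles along its shadow, fragmenting the shadow's shape."""
    return [target_rgb * m for m in masks]

# Usage: a plain white 720p target split across two projectors.
target = np.ones((720, 1280, 3), dtype=np.float32)
masks = fragment_masks(720, 1280, n_projectors=2)
frames = projector_frames(target, masks)
assert np.allclose(sum(frames), target)   # together the frames reproduce the target
```

Because each projector casts light from a different direction, blocking the beams with your body removes only the tiles owned by the occluded projector, which is what breaks the shadow into fragments.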

SIGGRAPH: In the SIGGRAPH 2019 Studio (and at SXSW), participants were able to interact and experiment with the tool. What has been the best part of demoing Fragment Shadow at SIGGRAPH? At SXSW?

SK: The most epic moment since first exhibiting Fragment Shadow has been seeing participants get surprised and even start to move their bodies. Since the (fragmented) shadow still completely reflects one’s own self, even when it is transformed, many people enjoy the changes to their “self” shadow. In particular, at SIGGRAPH it was also interesting to observe how participants tried to understand the visual changes in Fragment Shadow. The SIGGRAPH participants sometimes even started to discuss it while playing around, exclaiming, “Hmm… it is just a camera-and-projector system we already know… oh, wait NO, NO… this is not that. It is a real shadow changing. What is going on…?” So, we were happy that we could amuse participants from a technical point of view as well.

SIGGRAPH: In the technical description of your project, you mention that “the occlusion of one projector reveals images from other projectors.” Can you tell our readers a bit more about the technology involved?

SK: For instance, when one projector outputs red and another projector outputs cyan onto the same projection screen, we see an (almost) white color on the screen. However, once we enter the projection area, our shadows look red and cyan. This is a well-known phenomenon called the “color shadow,” but I found that we can make a more complicated version of it, and that enables the transformation of real shadows.
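A minimal numerical sketch of that color-shadow effect, assuming ideal additive light mixing in linear RGB (a real setup would need the calibration steps discussed above):

```python
import numpy as np

# Two projectors lighting the same screen point, in linear RGB (0..1).
red  = np.array([1.0, 0.0, 0.0])   # projector A output
cyan = np.array([0.0, 1.0, 1.0])   # projector B output

screen      = red + cyan           # both lights land -> roughly white
shadow_of_a = screen - red         # body blocks projector A -> only cyan remains
shadow_of_b = screen - cyan        # body blocks projector B -> only red remains

print(screen, shadow_of_a, shadow_of_b)
```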

In Fragment Shadow, each projector outputs light from a different direction, so the cast shadows appear in different positions on the screen. Once all calibration pipelines are complete, we can control the physical color on the screen with multiple projectors. For instance, we can synthesize a single image on the screen from two different images, and when we occlude one of them, the other image becomes “perceptible” as a shadow. This makes our shadow textured, even with a moving texture.
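As an illustration of the “inverse texture” idea, here is a minimal sketch assuming ideal additive mixing in linear RGB and two calibrated projectors; the function name and the neutral level are hypothetical, and the real system also handles geometry, color-space, and frame-synchronization calibration:

```python
import numpy as np

def split_for_hidden_image(hidden_rgb, neutral_level=0.9):
    """Split a 'hidden' image across two projectors so the screen looks like a
    flat neutral field, assuming ideal additive mixing in linear RGB.

    Projector B outputs the hidden image; projector A outputs the inverse
    texture so that A + B sums to the neutral level everywhere. Blocking
    projector A with your body removes its contribution, and the hidden image
    becomes perceptible inside that shadow."""
    hidden = np.clip(hidden_rgb, 0.0, neutral_level)   # B must not exceed the target sum
    inverse = neutral_level - hidden                   # A = neutral - B
    return inverse, hidden

# Usage with a hypothetical 100x100 test pattern.
hidden = np.random.rand(100, 100, 3).astype(np.float32)
proj_a, proj_b = split_for_hidden_image(hidden)
assert np.allclose(proj_a + proj_b, 0.9)   # the combined projection looks uniform
```

Animating the hidden image over time is what gives the shadow a moving texture, since only the occluded regions ever reveal it.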

SIGGRAPH: Have you attended a SIGGRAPH conference before?

SK: This was my fourth SIGGRAPH, and it always leads to great inspiration for me. In the 2016 VR Village [what is now known as the Immersive Pavilion], the YCAM team and I showcased Parallel Eyes, a four-person, view-sharing game. I have also demonstrated projects in the Emerging Technologies venue: first Wired Muscle, with Jun Nishida, in 2017, and again in 2018, debuting a wearable projector, HeadLight. This year, I actually had the chance to show two projects: the one we’re discussing today for the Studio, and, for Emerging Technologies, Preemptive Action, an electrical muscle stimulation (EMS) system that accelerates our actions while preserving the sense of agency, which I worked on with Jun Nishida and Pedro Lopes.

Exhibiting work at SIGGRAPH offers me a chance to observe so many participants experiencing my work, which provides a lot of insight into how people perceive or feel when interacting with a system. For instance, in 2017 when we showed Wired Muscle, we found that the reaction of participants was so interesting that it led to our next project, Preemptive Action.

Also, I would sincerely like to thank the technicians who help us exhibit our projects. They are always really supportive and help me take on new challenges. I look forward to having a chance to show a new project at the next SIGGRAPH conference!


Shunichi Kasahara is an associate researcher at Sony Computer Science Laboratories, Inc. and a project assistant professor at the Research Center for Advanced Science and Technology at The University of Tokyo. He joined Sony Corporation in 2008 and received a Ph.D. in interdisciplinary information studies from The University of Tokyo in 2017. He held a position as an affiliate researcher at the MIT Media Lab in 2012 before joining Sony CSL in 2014, where he leads “superception” research on the computational control and extension of human perception.
