“Becoming Homeless: A Human Experience” © 2018 Stanford University
Virtual reality (VR) makes escaping into fantasy worlds possible, but can it also generate urgency for engaging with the real world?
“Becoming Homeless: A Human Experience,” shown in SIGGRAPH 2018’s Virtual, Augmented and Mixed Reality program, is an interactive VR experience that puts users in the shoes of someone experiencing an eviction. Within the narrative, the user is forced to select items from their apartment to sell in order to cover rent they simply cannot pay. When the user is ultimately forced from their home, they find themselves spending the night on a bus, trying to stay safe in a situation they never expected. Press play below for a walk-through of the experience.
We spoke with Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Lab and one of the passionate creators behind “Becoming Homeless.” Bailenson studies the psychology of virtual and augmented realities, in particular, how virtual experiences lead to changes in perceptions of self and others. “Becoming Homeless,” he believes, can help to bring people closer to those who often go ignored.
SIGGRAPH: How might “Becoming Homeless” challenge the notion that VR is exclusively a method by which to escape the real world?
Jeremy Bailenson (JB): I think, in general, the VR simulations that “stick” tend to be ones that are solving a real-world problem. Whether it is training, empathy, or communication, it takes a reason to want to put on the headset.
SIGGRAPH: What were some of the VR inspirations for “Becoming Homeless”?
JB: A documentary called “Hotel 22” was the main driver for the bus scene in “Becoming Homeless.” [Seen at 3:48 in the video above.] The final two scenes of the video above show how we drew on that film’s narrative. For example, in the third scene, a user experiences what it is like to try to sleep on a bus and to navigate the tension between protecting yourself and your belongings while hoping to grab an hour of rest.
SIGGRAPH: Tell us a bit about the creation process for “Becoming Homeless” from a technology standpoint. What excited you about what you shared at SIGGRAPH 2018?
JB: It was critical that we get the story right. Before beginning any 3D modeling or coding, we spent over a year on the storyboard alone. Elise Ogle worked tirelessly to interview people who had lost their homes, staff at nonprofits created to help the homeless, and filmmakers who had made traditional, 2D documentaries on the topic. Then, we would imagine which narratives would work experientially in VR, using 15 years of research from our lab to guide the design principles. It was critical that the piece produce at least one “aha” moment for viewers that was intense and transformational.
Technologically, what is exciting about this experience is its robustness and scale. We have now run thousands of experimental participants through the experience — in museums, schools, festivals, and other locations. When we first started designing “Becoming Homeless,” we needed a high-end desktop and a six-figure tracking system to enable the proper body movements. Now, the whole thing runs on a laptop, and the hardware to do tracking costs only a few hundred dollars. We have also innovated a way to output tracking and behavioral data onto the cloud, and in our new studies we are approaching a “Mechanical Turk of VR.” We have adapted it from 18 degrees of freedom — full head and hand tracking — to a more limited version that runs with six, by just tracking the head position and using gaze as an input. What we are most proud of is the robustness of the system — it runs consistently, regardless of location or computer.
SIGGRAPH: What challenges did you encounter during the creation process?
JB: From a narrative standpoint, it was critical that we not reinforce stereotypes about the homeless. To this end, we iterated with many experts on the topic, altering the storyboard early and often after intense deliberations. From a technological standpoint, the classic question, “How do you direct the user’s attention in a 360-degree scene?” guided us at every stage. While we, of course, couldn’t completely solve this issue, our workarounds involved the use of spatialized sound to guide attention and “fail-safes” to force an action to occur if the person didn’t interact as expected. For example, we have a virtual human react in a particular way when you look at him — that is, put his head in a relatively central position in your field of view. But for those who didn’t look in that direction, we created a separate narrative thread that would then engage based on a timer.
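The gaze-or-timer fail-safe Bailenson describes can be sketched in a few lines. This is a hypothetical illustration, not code from the actual project: the class name, the gaze-angle threshold, and the timeout value are all assumptions, and a real VR engine would feed in head-pose data each frame.

```typescript
type Trigger = "gaze" | "timeout";

// Hypothetical sketch of a fail-safe attention trigger: a scene event fires
// when the user's gaze lands near a virtual character, but a timer guarantees
// the narrative advances even if the user never looks in that direction.
class AttentionFailSafe {
  private fired = false;
  private timer: ReturnType<typeof setTimeout>;

  constructor(
    timeoutMs: number,
    private onTrigger: (cause: Trigger) => void,
  ) {
    // Fallback narrative thread: engage after the timeout if gaze never landed.
    this.timer = setTimeout(() => this.fire("timeout"), timeoutMs);
  }

  // Called each frame with the angle (radians) between the user's gaze
  // direction and the direction to the character of interest.
  reportGazeAngle(angleRad: number, thresholdRad = 0.2): void {
    if (angleRad < thresholdRad) this.fire("gaze");
  }

  private fire(cause: Trigger): void {
    if (this.fired) return; // whichever path happens first wins; fire once
    this.fired = true;
    clearTimeout(this.timer);
    this.onTrigger(cause);
  }
}
```

The design choice here is that both paths converge on a single `fire` call, so downstream scene logic never needs to know whether the user actually looked or the timer rescued the scene.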
SIGGRAPH: Making an experience interactive, of course, hikes up not only the time required to bring it to life, but also the cost. Why was it crucial to make “Becoming Homeless” interactive?
JB: In screenplay writing, the mantra is “show don’t tell.” In other words, you should use action among characters as opposed to just dialogue. In VR, we say “do don’t show.” It’s all about experience. Our research shows that body movements are critical to the efficacy of VR experiences. So anything one can do to increase movement increases the effectiveness of the piece.
SIGGRAPH: In Against Empathy: The Case for Rational Compassion, author Paul Bloom writes, “When empathy makes us feel pain, the reaction is often a desire to escape.” Is there evidence that pro-social VR experiences like “Becoming Homeless” can actually lead people to take meaningful, persistent action?
JB: Yes, there is a growing body of evidence on the efficacy of VR for empathy. Fernanda Herrera just published a large-scale, longitudinal set of studies. People who went through “Becoming Homeless” in immersive VR were more likely to sign a petition supporting affordable housing compared to control conditions. Moreover, VR’s effect outpaced the controls even when measured two months after the experience.
SIGGRAPH: Where do you envision “Becoming Homeless” and other similar pro-social VR experiences existing in the world?
JB: The simulation is being installed at schools, libraries, museums, and video arcades. For the next year or two, VR will thrive in “location-based” settings where the hardware and software are maintained, as opposed to living rooms around the world. Also, anyone who has basic VR hardware can download a free executable file from our lab’s website.
SIGGRAPH: Since showcasing “Becoming Homeless” at SIGGRAPH 2018, what have you been working on?
JB: We have been working a lot on climate change. We just found out that a piece we premiered at the Tribeca Film Festival in 2016, the “Stanford Ocean Acidification Experience,” has been installed in 102 countries — half the planet! We will continue to focus on empathy and climate change [in our VR work].
“The Stanford Ocean Acidification Experience was downloaded in a majority of countries on Earth (102). Thanks to @Steam, @Viveport, and @Oculus for supporting experiential learning at scale. Thanks to @MooreFound for thinking outside the box in 2013. #VRworks” — Stanford VR (@StanfordVR), February 26, 2019
Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity. Bailenson studies the psychology of virtual and augmented reality, in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how virtual experiences can transform education, environmental conservation, empathy, and health. He is the recipient of the Dean’s Award for Distinguished Teaching at Stanford. He has published more than 100 academic papers, and his work has been continuously funded by the National Science Foundation for 15 years.