Interacting With AI ‘Inside the Classification Cube’

5 May 2020 | Art, Conferences, Research

Image by David Pace

“Inside the Classification Cube: An Intimate Interaction With an AI System” is a SIGGRAPH 2020 Art Papers selection that explores one of society’s most frequently asked questions — how do people portray themselves versus how others portray them? Here, we catch up with new media artist Avital Meshi to learn how the “Classification Cube” installation invites people to interact with an AI classification system and experience in real time how they are viewed through its lens.

SIGGRAPH: Share a bit about the process for developing your research, “Inside the Classification Cube: An Intimate Interaction With an AI System.” What inspired you to pursue this concept?

Avital Meshi (AM): The “Classification Cube” art installation was developed as my MFA thesis project for the Digital Arts and New Media program at University of California, Santa Cruz (UC Santa Cruz). Throughout my studies, I mainly focused on the role of avatars and possibilities for embodiment in virtual worlds, along with the performative engagement that comes with this practice. I started thinking deeply about AI systems while attending a course taught by professor Angus Forbes, which included a focus on applying deep learning for creative purposes. The class enabled me to better understand the nature of these advanced systems and think critically about their transformative power.

Our class discussions, along with my previous exploration of avatars, led me to examine the way bodies and identities are represented in AI systems. One of the things that struck me the most was that AI classification systems, which are becoming so prevalent in our environment, are designed to estimate information based on our appearance; however, since we do not have access to this information, we never know how we are interpreted by these systems. This realization inspired the creation of “Classification Cube,” which is an installation that invites people to interact with an AI classification system and see for themselves, in real time, how they are being seen through its lens.

SIGGRAPH: How did you develop the research? How many people were involved? How long did it take?

AM: The development of the installation took about four months. During this time we consulted with many artists and scholars from UC Santa Cruz, including professors Katherine Isbister, Marianne Weems, Edward Shanken, and Micha Cárdenas, among others. Their brilliant insights and generous feedback informed my thinking and assisted me in shaping the installation. In this process, we considered many aspects of human-computer interaction and the impact of design choices on social and emotional connections. We discussed how the networked body is being portrayed and represented within algorithms and how lacking and superficial this representation can be. We examined ideas of performative engagements and considered ways to encourage viewers to perform different behaviors to the system while still feeling comfortable and safe to do so. Lastly, we surveyed previous interactive AI installations and aimed to situate the piece among other artworks which explore AI in a critical way.

In building the AI system itself, I received guidance from my spouse, Ofer Meshi, who is an AI scientist. Ofer helped me to find suitable classification models and combine them into a single interactive system. The conversations I had with him while building the system were eye opening and highlighted the need for collaboration across disciplines in the field of AI.

SIGGRAPH: What was the most challenging aspect of developing the AI interaction?

AM: The most challenging aspect of developing the AI interaction was making sure that viewers were comfortable enough to interact with the system. These days, there is a sense of panic around AI recognition and classification systems, their invasion of our privacy, and their susceptibility to bias. As much as these notions appear to be justified, we did not want them to prevent people from interacting with the AI system inside the “Classification Cube.” Our goal was to allow viewers to spend time inside the installation, interact with the AI system, and see for themselves how it sees them and how they can change their visibility by engaging in a performative behavior.

In order to achieve this, we needed to make sure that people felt safe and comfortable inside the installation. The first, and perhaps most important, step was to design a system that does not collect any data from the people it interacts with. Although it could have been insightful to look at such data, it was important to us that the system not be perceived as a surveillance system. Additionally, we designed the interaction to be dynamic and to feel like a “conversation” between a human and a machine; outputs were produced in real time and kept refreshing and reappearing every couple of seconds. This encouraged viewers not to linger on a specific classification, but rather to focus on the interaction itself and the way their movement influences their visibility through the system. Finally, the design choices for the cube’s structure made the space feel private and immersive. For instance, we shaped the entrance to the cube to invite people in but also to signal others not to enter while someone else is inside.
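That refresh-without-recording behavior can be sketched minimally as follows. This is illustrative, not the installation’s actual code: the `classify` callable and the frame source are stand-ins, and the point is only that each frame is classified, displayed, and immediately discarded.

```python
import time

# Illustrative sketch, not the installation's code: classify each camera
# frame, show the result, and discard the frame. Only the label currently
# on screen is held in memory, so no record of visitors accumulates.

def run_display(frames, classify, refresh_seconds=2.0, sleep=time.sleep):
    shown = None
    for frame in frames:
        shown = classify(frame)   # overwrite the previous label, never append
        print(shown)              # display on the in-cube screen
        sleep(refresh_seconds)    # refresh every couple of seconds
    return shown

# Example with stubbed "frames" and a trivial classifier (sleep disabled):
last = run_display(["a", "b", "c"], classify=str.upper, sleep=lambda s: None)
# last == "C"; earlier classifications were overwritten, not stored
```

Because `shown` is overwritten on every iteration, the sketch mirrors the design choice of displaying classifications without ever storing them.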

SIGGRAPH: How does “Inside the Classification Cube” classify participants by demographic?

AM: The algorithms we used in the “Classification Cube” installation are “off-the-shelf,” pre-trained computer-vision models developed by the open source community and available in GitHub code repositories. These include a face detector, an age and gender estimation model, an emotion recognition model, and an action recognition model. The outputs displayed on the screen inside the installation are the labels estimated with the highest confidence. Unfortunately, the models we found offer only a reductionist view of identity’s complexities. For instance, the emotion classifier can estimate only 1 of 7 emotion categories, and the gender classifier offers only binary gender categories. Even the action classifier, which covers 600 different behaviors, is still very limited compared with the complex array of human behavior. These simplifications were mentioned again and again by viewers who interacted with the system and kept claiming that it classified them the “wrong” way. Whenever we heard such comments, we felt that our goals were met, mainly because we really wanted people to have the opportunity to open the “black box” and see for themselves how an AI system sees them.
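The “highest confidence” selection across several independent models can be illustrated with a hedged sketch. The model names, label sets, and confidence values below are invented for the example; the installation’s real models are separate pre-trained networks.

```python
# Hypothetical stand-in for the pipeline: each model returns confidence
# scores over its own label set, and the screen shows only the
# top-confidence label per model.

def top_label(scores):
    """Return the label estimated with the highest confidence."""
    return max(scores, key=scores.get)

def classify(per_model_scores):
    """per_model_scores: dict mapping model name -> {label: confidence}."""
    return {model: top_label(scores) for model, scores in per_model_scores.items()}

# One stubbed "frame" of model outputs; the values are invented:
frame = {
    "emotion": {"happy": 0.61, "neutral": 0.30, "sad": 0.09},      # 1 of 7 categories
    "gender":  {"female": 0.55, "male": 0.45},                     # binary only
    "action":  {"dancing": 0.40, "waving": 0.35, "sitting": 0.25},
}
print(classify(frame))
# -> {'emotion': 'happy', 'gender': 'female', 'action': 'dancing'}
```

Displaying only the argmax label per model is exactly what makes the output feel reductive: every lower-confidence possibility is hidden from the viewer.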

SIGGRAPH: “Inside the Classification Cube” encourages participants to perform their behavior to the system and alter the way it sees them. What did your research say about how people portray themselves to society and how society portrays them?

AM: AI classification systems look at people’s external appearance and estimate aspects of their identity based on that. The oppressive notions that come with this kind of visibility have led to discussions and practices that suggest ways to avoid being seen by AI systems. Unlike these practices, the “Classification Cube” installation explores the idea of being seen through AI systems in an empowering way. Once inside the installation, viewers soon realize that their own body serves as one of the system’s inputs. This realization can quickly lead to the understanding that if we move inside the space or change our appearance, the system classifies us in a different way. This subtle exploration of a performative engagement suggests that spending time with such systems and making the effort to better understand the way they work may open an opportunity to control them and make them see us as we want to be seen. In that way, AI classification systems can be regarded as potential platforms for radical identity transformation, and can be compared to platforms such as biotechnology and virtual reality.

SIGGRAPH: Participants can compare their classification to that of others. What was the purpose of that component of your research?

AM: One of the screens inside the “Classification Cube” portrays a diverse group of animated figures who are performing all kinds of different behaviors and are subjected to the same AI classification analysis. These figures were built using a randomized process which combined different body elements and textures to form a variety of appearances. Representing these figures inside the space served a couple of different purposes:

  • The first was to allow viewers to examine classifications of bodies other than their own and, through that, to gauge the effectiveness of the system. Viewers who compared themselves to these figures made comments like, “The system wasn’t wrong just about me; it was also wrong about these animated figures.” Such insights can open a broader discussion about how we, and not just AI systems, judge each other based on external appearance, which is probably one of the reasons AI systems are developed the way they are.
  • The presence of these animated figures inside the cube also encouraged viewers to engage more performatively. Because the figures are constantly moving, their classifications change dramatically. Viewers could mimic the figures’ behaviors and join the performance themselves.
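The randomized figure generation described above might look like this minimal sketch. All asset names are invented for illustration; the actual installation combines animated body elements and textures.

```python
import random

# Hypothetical sketch of the randomized figure generation: each animated
# figure combines independently chosen body elements, textures, and
# behaviors. Every asset name below is invented for illustration.

BODIES    = ["slim", "broad", "tall", "short"]
TEXTURES  = ["denim", "plaid", "metallic", "floral"]
BEHAVIORS = ["waving", "jumping", "sitting", "spinning"]

def make_figure(rng=random):
    """Randomly combine elements into one figure's appearance."""
    return {
        "body": rng.choice(BODIES),
        "texture": rng.choice(TEXTURES),
        "behavior": rng.choice(BEHAVIORS),
    }

# A small "crowd" of varied figures for the comparison screen:
crowd = [make_figure() for _ in range(5)]
```

Independently sampling each attribute is what yields the variety of appearances the piece needs for viewers to compare classifications across many different bodies.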

SIGGRAPH: Have you presented work at SIGGRAPH before? If yes, share your favorite memory from that experience.

AM: I had the honor to participate in SIGGRAPH’s online exhibition “The Urgency of Reality in a Hyper-Connected Age,” curated by Dena Eber in 2018. This exhibition seems more relevant than ever as digital spaces take the place of physical reality. My piece in this exhibition is a performance that explores the social connections of a mother who holds her baby while being fully immersed in a virtual world. One of the things I liked best about participating in this exhibition is that it continues to be accessible and relevant, even now.

Avital Meshi is a new media artist who focuses on the way people connect with one another through new technologies such as video games, virtual worlds, and artificial intelligence. She creates interactive installations and performances that invite viewers to become entangled with technology in unusual ways. Her artwork provokes questions regarding the manifestation of identity and social and cultural shifts, which are mediated by contemporary computational media. Meshi received her MFA from the Digital Arts and New Media program at UC Santa Cruz and her BFA from the School of The Art Institute of Chicago. She also holds a BSc and an MSc in behavioral biology from The Hebrew University of Jerusalem. Her work was shown at Currents New Media festival in Santa Fe, New Mexico; Root Division Gallery in San Francisco; ACM SIGGRAPH; NeurIPS AI Art Gallery; Woman Made Gallery in Chicago; and more. Avital currently lives and works in the San Francisco Bay Area.
