Computers now complete many tasks that formerly required human thought, from playing chess to transcribing a phone call. These advances are called artificial intelligence, or A.I. Many A.I. systems rely on "machine learning," in which software looks for patterns in large collections of images, texts, or other data so that it can "learn" to do something useful, such as identify faces in a photograph or find signs of cancer in a scan. A.I. can also be biased or manipulative. Governments and corporations collect images from social media and public places, using this data to track identities and classify emotions. Personal photos and videos are stockpiled without permission. Flawed algorithms amplify bias. Faulty law-enforcement tools can trigger harassment and false arrests. Measured and monetized, our faces have become valuable data, sold and circulated without public oversight.

Come explore the possibilities and limits of A.I. by interacting with technologies such as face detection, emotion recognition, and eye tracking. The dramatic installation design of this exhibition features a canopy of abstract, synthetic reeds, suggesting an uneasy marriage of nature and technology.
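For a concrete sense of how such pattern-finding works, here is a minimal sketch of face detection using the open-source OpenCV library and its pretrained Haar-cascade model. The photo filename is hypothetical, and the software in this exhibition is far more sophisticated; this is only an illustration of the general technique.

```python
# A minimal sketch of classical face detection with OpenCV's
# pretrained Haar cascade. "visitors.jpg" is a hypothetical photo.
import cv2

image = cv2.imread("visitors.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load a detector whose patterns were "learned" from thousands
# of labeled example images.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Each detection is a bounding box: (x, y, width, height).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 2)

cv2.imwrite("visitors_detected.jpg", image)
print(f"Found {len(faces)} face(s)")
```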
In the future, people will control computers with their faces, eyes, and even their minds. Expression Mirror, created by Zachary Lieberman, invites you to use your face to interact with the computer, camera, and screen. Custom software seeks matches with the facial expressions of other visitors, combining faces to generate unique social portraits. Dynamic white lines create an abstract, graphic interpretation of emotional states. Lieberman is one of the makers of openFrameworks, a tool for creative coding, and a founder of the School of Poetic Computation in New York City. EyeWriter, an eye-tracking interface that Lieberman co-created for people with paralysis, won Design of the Year (Interactive) 2010 from the London Design Museum.
In Zachary Lieberman's Expression Mirror, the faces of viewers generate unique animations. Shown here are the data at work in his project. Face-detection software tracks muscle movements at 68 locations on the face. Emotion-recognition software interprets these movements. As you mold your face into different expressions, Lieberman's system builds a database of eyes, noses, and mouths, finds emotional matches among those parts, and generates patterns of abstract lines that behave differently in response to different facial expressions. Together, these elements create dynamic social portraits.
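Tracking 68 facial locations is a standard technique. The sketch below shows it with the open-source dlib library and its publicly distributed 68-landmark model; it illustrates the general method, not Lieberman's own code.

```python
# A sketch of 68-point facial-landmark detection with the open-source
# dlib library. Requires the public model file
# "shape_predictor_68_face_landmarks.dat"; "face.jpg" is hypothetical.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray):
    shape = predictor(gray, rect)
    # The 68 points cover the jawline, brows, eyes, nose, and mouth;
    # frame-to-frame changes in these points trace muscle movements.
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    for (x, y) in points:
        cv2.circle(image, (x, y), 2, (255, 255, 255), -1)

cv2.imwrite("face_landmarks.jpg", image)
```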
A.I. systems often make mistakes. Expression Portrait, created by R. Luke DuBois, invites you to experience the limits of A.I. You are asked to express an emotion, such as sadness, anger, or disgust. A camera records your expression and employs software tools to judge your age, gender, race, and emotional state. Systems like these, commonly used in surveillance, are biased because they "learn" from databases that focus on limited populations or that classify people using narrow categories. DuBois uses music, art, and software to explore the social implications of technology. He is the director of the Brooklyn Experimental Media Center at the NYU Tandon School of Engineering. His visual works expand the limits of portraiture by linking human identity to data and social networks.
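Tools of this kind are available off the shelf. The sketch below uses the open-source DeepFace library (a recent version; not DuBois's actual software) to estimate age, gender, race, and emotion from a hypothetical photo. Its pretrained models inherit whatever biases their training data contain, and the exact return keys can vary between library versions.

```python
# A sketch of off-the-shelf facial attribute estimation with the
# open-source DeepFace library. "portrait.jpg" is a hypothetical photo.
from deepface import DeepFace

results = DeepFace.analyze(
    img_path="portrait.jpg",
    actions=["age", "gender", "race", "emotion"],
)

# analyze() returns one result per detected face (in recent versions).
for face in results:
    print("estimated age:", face["age"])
    print("dominant gender:", face["dominant_gender"])
    print("dominant race:", face["dominant_race"])
    print("dominant emotion:", face["dominant_emotion"])
```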
The machine-learning algorithms used in A.I. seek patterns in large collections of images and videos. To calculate emotion for Expression Portrait, DuBois used the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), which consists of video files of 24 young, mostly white, drama students, and AffectNet, which features many celebrity portraits and stock photos. To calculate age, DuBois used the IMDB-WIKI database, which relies heavily on photos of celebrities and other famous people. For race and gender, he used the Chicago Face Database, which adheres to a binary definition of gender (male/female) and a US-based definition of race (white, black, Latinx, or Asian), categories that fall apart in our global, multiracial world. All these databases are biased, which explains the biased results.
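The effect of a skewed training set can be demonstrated in a few lines of code. The sketch below uses entirely synthetic data, assumed for illustration: a classifier trained on 950 examples from one group and only 50 from another scores far worse on the underrepresented group.

```python
# An illustrative, fully synthetic demonstration of dataset bias:
# a classifier trained mostly on "group A" performs worse on the
# underrepresented "group B". All data here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features whose relationship to the label differs by group.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", model.score(Xt, yt))
```

On a typical run, the model scores well above 90 percent on group A but close to chance on group B, mirroring how face-analysis systems trained on narrow populations fail on everyone else.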
Judging a person's character from their facial features has a long history, linked to racial stereotypes and criminal profiling. For centuries, artists and scientists have measured and codified facial features. These practices often serve ideologies of white racial superiority and the belief that moral character is written on the face and skull. Jessica Helfand is a designer, writer, and historian. She is a founding editor of Design Observer and the author of numerous books on design and cultural criticism. Her latest book is Face: A Visual Odyssey (MIT Press, 2019).
Exploring the intersection of A.I., emotion detection, eye tracking, and bias, this immersive storytelling experience by Karen Palmer reveals how your gaze and emotions influence your perception of reality. Perception IO (input/output) is a prototype and an ongoing work in progress. This reality simulator invites you to evaluate your perceptions, become aware of your subconscious behavior, and reprogram it. You will take the position of a police officer watching a training video of a volatile situation. How you respond will have consequences for the characters. The system will track your eye movements and facial expressions. Analysis of your gaze and your expressions will be revealed, allowing you to examine your own implicit biases. How comfortable are you with the idea that your perceptions of reality have real-life consequences? Would you bet your life on it?

Karen Palmer calls herself the Storyteller from the Future. The research-based artist believes that the Age of Information, which has divided society, will soon give way to the Age of Perception, a period of greater understanding. Perception IO will enable you to see how human bias creates biased networks and to understand the need to regulate the use of artificial intelligence.