Video, Backstory: Expression Portrait
The machine learning algorithms used in A.I. seek patterns in large collections of images and videos. To calculate emotion for Expression Portrait, DuBois used the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), which consists of video files of 24 young, mostly white, drama students, and AffectNet, which features many celebrity portraits and stock photos. To calculate age, DuBois used the IMDB-WIKI database, which relies heavily on photos of celebrities and other famous people. For race and gender, he used the Chicago Face Database, which adheres to a binary definition of gender (male/female) and a US-based definition of race (white, black, Latinx, or Asian), a scheme that falls apart in our global, multiracial world. All these databases are biased, which explains the biased results.
Credit: Courtesy of R. Luke DuBois.