Liu, who is also the managing director of the Michigan Institute for Data Science, shares that before working on “Fair Representation in Arts and Data,” she had already been kicking around the idea of using artwork to demonstrate to the public, “in a very intuitive way, both the power of data science and the harm of data science.”
“We know that data science and artificial intelligence (AI) systems have implicit bias and that momentum needs to be built up around the topic,” Liu says. “For a few years now we have thought about educating the public in some way. When we found out that there was funding for pilot projects, I knew it was a chance to be a part of doing something really substantial.”
Working Together to Make Change
The project’s initial findings are certainly thought-provoking, though some key highlights did not surprise the research team. For instance, they uncovered that the algorithms often failed to recognize female faces in the collection, and that the collection skews heavily white.
“We essentially did face detection over UMMA’s entire collection,” Brueckner explains. “We found all the faces in the collection, and then we applied publicly available, open-source algorithms that assigned race and gender classifications to those faces.”
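The article does not name the tools the team used, but the workflow Brueckner describes, detect faces across a collection, classify each one, then tally the labels, can be sketched in miniature. In this sketch the `Face` records stand in for faces a real detector (such as OpenCV’s Haar cascades or an open-source deep-learning model) would crop from the artwork images; the field names and toy data are illustrative assumptions, not the project’s actual pipeline or results.

```python
from collections import Counter
from typing import NamedTuple

# Stub record standing in for one face cropped from an artwork image.
# In a real pipeline, a detector would find the face and separate
# open-source classifiers would assign the demographic labels.
class Face(NamedTuple):
    artwork_id: str
    gender: str   # label from a gender classifier (illustrative)
    race: str     # label from a race classifier (illustrative)

def tally_demographics(faces):
    """Aggregate classifier labels across every detected face."""
    gender_counts = Counter(f.gender for f in faces)
    race_counts = Counter(f.race for f in faces)
    return gender_counts, race_counts

# Toy data illustrating the aggregation step -- not UMMA's data.
faces = [
    Face("art-001", "male", "white"),
    Face("art-002", "male", "white"),
    Face("art-003", "female", "white"),
    Face("art-004", "male", "asian"),
]

genders, races = tally_demographics(faces)
print(genders.most_common())
print(races.most_common())
```

Aggregating counts this way is what lets a team see, at a glance, which groups dominate a collection and where the classifiers may be failing.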
She said it is often hard to understand why these algorithms make certain decisions, but the researchers have found the results fascinating. She points to one unexpected discovery: when an algorithm was left to categorize the visual input, the most representative face it found in the collection was a painting of a clown.
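The article does not say how “most representative” was computed; one common approach is to embed each face as a feature vector and pick the medoid, the face whose total distance to all the others is smallest. A minimal sketch under that assumption, with tiny hand-made 2-D vectors in place of real face embeddings:

```python
import math

def medoid(vectors):
    """Return the index of the vector with the smallest total
    distance to all others -- one common notion of the 'most
    representative' item in a collection."""
    best_i, best_total = 0, float("inf")
    for i, v in enumerate(vectors):
        total = sum(math.dist(v, w) for w in vectors)
        if total < best_total:
            best_i, best_total = i, total
    return best_i

# Toy 2-D "embeddings"; a real pipeline would use high-dimensional
# face embeddings from a pretrained model (an assumption -- the
# article does not name the method used).
embeddings = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.1), (4.0, 4.0)]
print(medoid(embeddings))
```

Here the third vector wins because it sits nearest the center of the cluster; an atypical image chosen this way, like the clown painting, tells you something about what the embedding space treats as “average.”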