
Sophia Brueckner on Tech, Humanities, and Futures
In February 2017, Stamps Assistant Professor Sophia Brueckner was one of eight featured speakers at TEDxUofM 2017: Dreamers and Disrupters.

Brueckner has been computing since the tender age of two on her Commodore 64 and has a deep, lifelong interest in how technology shapes what it means to be human. During her talk, Brueckner encouraged audience members to embrace a “critically optimistic” philosophy when considering the role of technology in our lives, both presently and in the future.

A former software engineer at Google, Brueckner left Silicon Valley in 2010 to pursue an MFA in Digital + Media from RISD and an MS in Media Arts and Sciences from the MIT Media Lab. Explaining the shift from engineering to visual art/design, Brueckner stated: “I wanted to have a greater role in envisioning the future of technology.”

The Stamps Communications Team caught up with Brueckner by email during her Artists’ Residency at Autodesk Pier 9 to learn more about technology-driven futures, the ethical explorations of science fiction, and what it means to be a critical optimist.

In your own words, what does it mean to be “critically optimistic” about the role of technology in our lives?

With regard to technology, I see people’s attitudes divided between two unhealthy extremes. Many, especially those involved in the development of technology, demonstrate an attitude of blind optimism where they believe technology can solve all problems and improve all aspects of life. They focus on what a technology can do without considering what it takes away or how it might be misused. At the other extreme, I see an attitude of unconstructive pessimism where people can only see how technology is ruining things, and they don’t propose alternative directions. People on opposing sides struggle to communicate, and neither of these extreme attitudes will result in better technologies. Instead, I believe we need to cultivate a sense of critical optimism, where hopefulness is tempered with criticality. We need people who have enough hope to envision the possibilities for what can be built, but who also have a healthy dose of criticality, such that they can see where their technologies might go wrong and attempt to either prevent or mitigate the negative consequences.

Why is critical optimism important?

During my residency at Autodesk Pier 9, I’ve been working on a new wearable technology inspired by science fiction and cyborgs. I’m mindful that wearable interfaces have the potential to be incredibly intrusive and controlling, so I’ve recently been rereading the works of Donna Haraway. This quote of hers really stuck with me while I designed the interface:

“Technology is not neutral. We're inside of what we make, and it's inside of us. We're living in a world of connections — and it matters which ones get made and unmade.”

My experiences in Silicon Valley taught me that technology is certainly not neutral. Technology is imbued with the values of those who designed and built it. When a technology becomes popular, it becomes very difficult to change its fundamental architecture, so it’s essential to be able to think in extrapolative terms about what happens when a particular technology scales. However, neither utopian nor dystopian thinking is adequate for making thoughtful choices in the early stages of the design process. A critically optimistic approach, which is a more balanced type of speculative thinking, is necessary.

How do you safeguard your own creative work against the black-and-white rationalization of technology as “good” or “evil” to embrace critical optimism?

My background is technology heavy, and that could result in my showcasing my technical skills in every project. However, I’ve become careful about not using technology when it is not the best solution. When I do decide to build a new technology, I consider how it would ideally be used as well as how it might be misused. Kentaro Toyama, one of my collaborators in the School of Information, eloquently described technology as an amplifier of intent, both good and bad. Knowing this, the design of an interface must encourage its positive uses and minimize misuse. However, this comes with the caveat that both the designer and the user need to be aware that the interface is exerting control through its structure.

Your creative work seems very interested in how technology can interpret, or misinterpret, human emotion. Crying to Dragon Dictate, a computer-generated transcription of you crying read aloud as poetry, is a great example of this. Would you like to see a future where tech can better interpret our emotions?

Crying to Dragon Dictate was a turning point for me because it revealed my own naiveté about my relationship with technology. Up until then, I was an ideal user/consumer, and I enthusiastically wanted to apply technology to everything to make it better. When my repetitive stress injuries made me less than an ideal user, I was able to see more clearly the structure imposed on me by technology’s interfaces.

I cried for five minutes to Dragon Dictate. I used the Mac OS X screen reader to read back the result.

Like many programmers, I know what it means to be “in the zone”. It’s like an ecstatic flow state where you are so fluent in computer programming that you can express your intentions as code without having to translate them. However, to achieve this “in the zone” state, you have to adapt yourself to the interface and make yourself think like a computer. As your thinking changes to fit the technology, you lose some of your humanness. In this state, you can only think what the programming language allows.

This doesn’t just happen with programming. More generally, user experience designers strive for this in their designs…the goal is for the user to be conscious only about what they are trying to do and forget the interface exists.

Also like many computer programmers, I ended up with repetitive stress injuries to my wrists. Unable to type, I was forced to use Dragon Dictate, a popular speech recognition program, to interact with my computer. This limitation interrupted the seamless, nearly ecstatic relationship I had with computers, and, at one point, I spent an hour attempting to type only a few sentences using the software. Extremely frustrated, I cried while the speech recognition software was still running. The text in this piece is the result of Dragon Dictate’s interpretation of my crying. I used the Mac OS X screen reader to read the text aloud.

This was a turning point in my relationship to technology…it was like I went from seeing the world through sparkly clean, invisible glass to glass so filthy all you can focus on is the dirt. Technology once seduced me into feelings of godlike, superhuman empowerment, but I became painfully aware of its controlling interfaces shaping my thoughts and behavior. Popular user experience design textbooks define user experience design as the design of behavior, and they state that successful UX design should be invisible to the user. The user seemingly executes their intention completely naturally, without any awareness of the interface guiding their behavior. Knowing who the people are behind these interfaces, I no longer mindlessly embrace current technologies.

In your TEDx talk, you speak about the instinct for some to stray towards “technosolutionism,” the idea that all things can be solved with an app or a device. In your creative work, this seems to be a concept that you explore as well. Specifically, the networked devices Empathy Box — and its wearable companion Empathy Amulet — connect anonymous people through shared human warmth. I’m curious to know if this work is a societal prompt for the creation of more “human-like” tech — or if this is an example of how close connectedness is a job best undertaken by humans.

Technosolutionism is the tendency to believe that all problems can be solved with technology, and it’s an attitude that’s common in the tech industry. A simpler way of saying it: when you have a hammer in your hand, everything starts to look like a nail. If you make smartphone apps, it can seem like every problem can be addressed with a smartphone app. If you work in the tech world, it is easy to forget there are solutions to problems that don’t involve collecting more data, more sensors, better AI, etc.

Sophia Brueckner, Empathy Amulet, 2014

In my own work, I purposefully choose to design technologies that facilitate interactions that are impossible without technology. Technology should provide new ways to interact with people, not replace our real-life interactions. The Empathy Box and Empathy Amulet are good examples of this. There are many related projects that attempt to alleviate the pain of long-distance relationships by simulating being close to someone in person through haptic interfaces. This will always fall short, and I have no interest in trying to simulate real-life interactions with a technology. Instead, my devices connect you through warmth with a group of strangers in order to change your perspective on your connectedness with people in general, especially those outside of your social circle. Since you will never know these people’s names, how they look, or the details of their lives, this type of connection would be impossible in real life.

I’m curious to know what you think of products like Paro, the robotic stuffed animal seal that’s been in use in nursing homes and care settings since 2003 to give patients a relaxing, animated plush companion to engage with. On one hand, this would feel like we’re outsourcing empathy to a technological device — but on the other, many reports find that the product really seems to help patients.

Robots like Paro worry me because they don’t actually care about or empathize with someone. They simulate caring and empathizing. If a person simulated or faked caring about another person, we’d probably call that manipulation because it’s not genuine. In a cute robot, we don’t immediately see it as manipulation. Instead, we are amazed by the novelty and technical sophistication. It’s true that studies are showing that these types of technologies are a comfort to patients who are socially isolated, and it can be tempting to say that anything that helps is a good thing. However, that’s another example of falling into the trap of black-and-white thinking. Instead, we should be asking: what if there is more than one way to help, and is one way preferable to another? While simulations of caring might help lonely people feel better, what is the societal impact of shifting the burden of care from people to robots? How will that change family, friendship, and community? Addressing the same issue of social isolation in nursing homes, I saw an experiment that involved running a preschool out of a nursing home, which turned out to be great for both the residents and the children. Perhaps we should focus on approaches like this instead.

What is a real-world example of “human-like” tech that you hold up as a model with healthy societal implications?  

One example I like to refer to is OXO’s Good Grips tools, which were designed for people with rheumatoid arthritis. They were created for a smaller population of people with a particularly severe issue, but making sure the product worked for them made it better for everyone else.

In the high-tech/software world, I can’t think of a perfect example of a specific technology that is “human-like”, humane, empathetic, or even good, because the technology tends to be less specific and in flux. However, I can think of people who are actively struggling with how their technologies fall short and are constantly iterating on their work to make it better. This resistance to complacency is what I’d choose to highlight, because technology is less static than ever. Recently, scholar and author Donna Haraway described this as “staying with the trouble.”

At Stamps, you teach an undergraduate course called “Science Fiction Prototyping,” where students read the same science fiction book and create functional prototypes based on the reading. Why is science fiction such a good lens for students to learn about creative invention?

Reading science fiction is like ethics class for designers, inventors, and engineers. Science fiction looks at current technological and societal trends and extrapolates them into the future. It speculates on the consequences of these trends, both good and bad, if they continue unchecked.

In my course, I use sci-fi books, short stories, and films as a prompt for students to reflect on what happens when technology scales. We discuss the authors’ visions of the future and then apply that same kind of speculative thinking to the emerging technologies we see in the world today. My primary goal is to teach the students to avoid black-and-white thinking when it comes to technology. Unconstructive criticism is avoided by requiring the students to design new technologies and build prototypes that are as functional as possible. Naive optimism and technosolutionism are avoided by requiring that all iterations of projects undergo the serious critique you find in art and design schools, merged with the extrapolative thought process of sci-fi authors. Ultimately, the students learn how difficult and important it is to be both earnest and honest…earnest in trying to do good, honest about how things can and likely will go badly, and willing to engage in the messy navigation of the good and bad over time.

What science fiction book do you wish you could make required reading for every student the world over?

Ursula K. Le Guin is my personal favorite, but, in general, I would strongly encourage people to read more science fiction by diverse authors so that they are exposed to a diverse set of possible futures. By seeing a greater range of possibilities, we know more confidently which sort of future we prefer and which we want to avoid, and only then are we able to be deliberate about what we choose to build.

To learn more about Sophia Brueckner’s work, visit her website or watch her TEDxUofM talk.