Matt Groh is a computational social scientist and PhD candidate at the MIT Media Lab in the Affective Computing group. He looks at how machine predictions—about, for instance, whether a photo is real or engineered—influence human decisions, and how human decisions about what to add to AI training datasets influence machine predictions. He considers these questions as they occur in the realms of misinformation, medical diagnosis, and creative problem-solving. He also studies how to blend AI and human problem-solving to build the most effective socio-technical decision-making systems.
During the Dana Center planning stage at the MIT Museum, Groh taught the sold-out “Make A Fake” class to a group of 15 people. The group met on three Monday evenings in February for ninety-minute sessions. Groh’s energy, animation, and constant interaction with the students engaged the diverse group—made up of people who’d grown up not only in the U.S. but also in Turkey, Canada, and Argentina; who ranged in age from an MIT undergraduate to a retired hotel manager; and who worked for organizations like the Massachusetts ACLU and the Harvard Graduate School of Education. All but one of the fifteen joined a focus group after the final session to discuss the class.
Q: You were very engaged with the students throughout the class, taking their temperature on the various ethical and societal issues you raised. Nonetheless, did anyone say something during the focus group that surprised you?
Matt Groh: Bill [a retired hotel manager] mentioned that he is much more afraid of misinformation now, specifically the kind produced by generative AI, than he was three weeks ago. That surprised me. All of this is very normal and familiar to me, and I thought I was sharing with the class information that would help people recognize why they might not need to be as scared of misinformation as the media narrative often suggests they should be. But even though he talked about making a video of himself that he shared with his friends—so he could see the technology as a tool for humor, too—after seeing firsthand how easy the technology makes it to manipulate videos and images, he was scared.
All the positive use cases that people mentioned surprised me too. There was such a diversity of them. For example, Audrey, the ceramicist, teaches ceramics, and she wants to show her students text-to-image programs like DALL-E and Stable Diffusion to help them with the creative process—to visualize what they might make, but also to think about pottery they could create that might be more interesting or creative than what they’ve done before. The student from Turkey talked about using those programs to envision what his new home and garden might look like. I hadn’t considered uses like those.
Q: The goal of the Dana Center project is to develop activities that benefit from the contributions of neuroscientists, ELSI scholars, and public engagement, providing value to people from all three domains. Did your class show you ways that these domains might influence each other? Did you take away unique value from working on an activity where these domains intersect?
Matt Groh: Yes. It was very interdisciplinary. We used tools available to everyone online, enabled by computer science, to understand aspects of cognitive science and neuroscience and how all of that relates to the ethical, legal, and social implications of the new technologies we looked at—technologies that generate fake content, whether it’s text produced by a large language model like ChatGPT or a computer-engineered deep fake video.
I’ll give you an example. As my colleagues and I are learning, visual information is really important for a person who is trying to determine whether a video is real or fake, or whether a photo of what seems to be a human is real or fake. So lessons from vision science and neuroscience inform our work as we think about how people can distinguish real from fake. For instance, we’re learning that human beings process faces in a special way, using a part of the brain that seems to be dedicated to that task. Knowing from neuroscience that humans have a specialized face-processing center is useful as we think about how people can recognize deep fakes—because deep fakes, right now, are all about manipulating faces. Because we have that specialized face-processing center, we humans might be well equipped to recognize deep fakes. But contextual clues will also help us figure it out. By that I mean, if we come across a video we’re not sure about, we can ask ourselves questions like, “Would this public figure say this thing? Would they say it now? Why? What could their goal be in saying or doing this?”
Understanding the ways that people distinguish between real and fake is important, because it has ethical, legal, and social implications.
Q: If the MIT Museum gets the Dana Center grant, museum manager Ben Wiehe may have neuroscientists, ELSI scholars, and public engagement specialists work in overlapping two-year cycles. If you could work with the museum for two years on a Neuroscience and Society project, what would you like to do? And how has the planning you've done, and the events you've run, informed what you'd like to do?
Matt Groh: Museums have the opportunity to enable more citizen science, so one thing that could be really neat is engaging the public in more research opportunities. Museums can create exhibits that are digital in some way, so that they capture data that researchers can then evaluate. By capturing that data, museums can run randomized experiments with people who are engaging with different exhibits—like an exhibit I did at the museum, True or False?, which invited visitors to try to decide whether the videos and photos we made were real or fake. You would want to make sure people knew exactly what they were getting into, of course. But that data could be very interesting. Imagine if you gave visitors a lot of time to play around with text-to-image tools like the ones we used in class, DALL-E and Stable Diffusion, and tracked how people use them. That could be a really useful thing for scientists to understand, and it would be research unique to the MIT Museum, where there would be a lot of traffic, and a lot of diverse traffic. Part of doing good science is making sure people are engaged, because if they’re not paying attention, that affects whatever you’re asking them to do. So if we presented something like this at the museum, we’d do it in a gamified, very engaging way.
Photography by Ashley McCabe