AI for good: Auburn faculty changing conversations about humanity, technology within community
Faculty from the Auburn University College of Liberal Arts (CLA) and Samuel Ginn College of Engineering want to change the way we talk about artificial intelligence (AI) by drawing on their expertise in the arts, humanities and sciences.
People once believed that the invention of the radio would end war and the automobile would make horses obsolete. Cars eventually became more popular than horses, but it took decades, and the radio reported on wars it did not prevent. According to technology historians such as Associate Professor Xaq Frohlich, AI is the latest example of how the revolutionary impact of a new technology is historically overestimated.
He co-organizes community outreach events called “AI Cafés” to reframe the narrative about AI, emphasizing how this technology can be developed for the public good.
“My commitment as an intervention is not to say AI is good or bad, but to show how this polarized, good-versus-bad framing of AI is shaping public opinion,” Frohlich said. “I’m seeing an opportunity for us to create a different kind of context than one that has almost entirely been defined by heavily capitalized corporations. We work at Auburn, and we have an interest in creating public interest, so what would AI look like if it was being driven by a public interest or community approach?”
At the latest AI Café in November, audience members expressed concerns over AI negatively impacting privacy, creativity and critical thinking skills while spreading bias and plagiarism. Hopes for AI included a better quality of life, a positive impact on organizational health and solutions to complex problems.
The cafés provide space for community members to map their feelings around AI by asking questions such as: What kind of work do you want to support, and how does AI change that? When is it okay to use AI? Will AI make labor more efficient?
For example, someone can use AI to generate animation. But Assistant Professor of Animation Sara Gevurtz, who studies experimental innovation, said skilled animators are still needed to turn the AI-generated product into something usable.
“To use AI effectively, you still need all the foundation. My students still need to know all the principles of animation and timing, they still need to know how to draw a figure, otherwise they can’t make an AI mesh usable,” Gevurtz said. “So, in my field, AI is like an upper level. It’s not that I’m against it, but I’ve got to give them the foundation first before we can have a productive conversation about how to actually use it.”
Confronting the role of artificial intelligence in society requires asking philosophical questions, referencing the history of emerging technology and considering how humanity contributes to the arts.
Associate Professor of Art Lauren Woods teaches figure drawing and painting. AI can generate art in a way that represents an average across huge bodies of existing work, but Woods said it’s important to engage the mind and body to create art that people enjoy for its originality.
“Anything can be art. AI can be used to make art, but as humans, we don’t have to think it’s good. It’s about how we consume it,” Woods said. “Something that interests me regarding AI and my art practice is the idea of embodiment and embodied expression: how important it is to us as human beings that artwork is being made through the human body.”
The human touch is also critical in storytelling. Joan Harrell, journalism lecturer, AI Café co-organizer and CLA’s director of strategic programs and initiatives, said it takes a human to sort through generated news content to ensure that biased or “fake news” isn’t distributed.
Further, Harrell said no one can tell the human story like a human being, who can represent communities with honesty, empathy and ethics that are often skewed in artificially generated content.
“What many people don’t realize is that AI has been used for decades, and that is why we have the perpetuation of stereotypes of coverage for communities and people,” Harrell said. “Algorithms impact every single aspect of our lives, including the way that we receive the news. We also have to think about how AI is blurring the lines of truth and accuracy in critical information that we’re receiving.”
The idea of human-centered computing is not new, though it has evolved into human-centered AI. Cheryl Seals, AI Café co-organizer and Charles W. Barkley Endowed Professor in the Department of Computer Science and Software Engineering, has taught it all, from predictive to generative to empathetic versions of artificial intelligence.
She said educators have a responsibility to prepare their students to bring relevant skills to an industry that will involve AI, just as previous cycles of breakthroughs and breakdowns have required engineers to adapt.
“They said the world is going to end in 1999. We made it through that. We will make it through this for the students,” Seals said. “We’re going to always be innovators. That’s why we’re here. Be intentional about your learning. Don’t be worried about what’s going to happen. You have a good foundation. We’ll learn these tools. We’re going to have to change because life is changing.”
In February 2026, CLA will host “The AI Nexus: The Arts, Humanities and Engineering Converge,” a conference that invites participants to explore the dynamic intersection of artificial intelligence with the arts, humanities and engineering.
Learn more about the AI Nexus at the College of Liberal Arts website.
Tags: History; Art and Art History; Community, Outreach and Engagement; Communication and Journalism