A few years ago, making an AI-generated image look realistic took considerable effort; now it can be done with the click of a button. As artificial intelligence advances, the percentage of Americans who can identify AI-generated content has decreased.
“As artificial intelligence rapidly evolves from a tool of convenience to a force reshaping entire industries, the question is no longer if it will change the world, but how far it will go in redefining what it means to be human.” That sentence was written by ChatGPT. Could you tell?
According to a 2025 Pew Research Center study, 53% of Americans are not confident in their ability to tell the difference between AI-generated content and human-made content, and 50% of adults think AI worsens people’s ability to form meaningful relationships.
In the same study, Pew Research Center found that over 95% of adults had at least heard of AI as of June 2025. As chatbots like ChatGPT and Gemini become more widespread, AI is becoming harder to avoid in daily life.
Identifying AI has become an integral part of media literacy, and Dr. Paul Shovlin believes more people should understand it. Shovlin serves as an assistant professor of AI and digital rhetoric in a joint placement in both the English department and the McClure School of Emerging Communication Technologies at Ohio University.
“(AI) is wrapped up in what literacy means for us in higher education,” Shovlin said. “Some people talk about AI literacy, and I talk about AI literacy, but in my head, what I'm thinking of is this is a standard component of what it means to be literate in higher education and in the professional world going forward.”
Rather than viewing AI as a problem to be fixed, Shovlin sees it as an integrated part of the technology people use every day and believes that understanding how to use it is essential.
Gwen McPherson, a freshman studying biological sciences, shares a similar perspective and emphasizes a nuanced approach to AI in education.
“I use AI often when it comes to learning new chemistry equations,” McPherson said. “It shows me step by step how to do the problems, and I’m a visual learner, so it helps me understand it better. But I don’t use it to cheat, because I want to feel a sense of accomplishment and feel capable of doing problems on my own.”
AI can be used for more than equations and mathematics; it is also being incorporated into the arts, including at OU. Starting Monday, OU Visual Communication students are showing a generative AI storytelling exhibit titled “Generative Sparks,” which runs for two weeks and ends Oct. 3.
The exhibition is based on work from the visual communication class Generative AI. According to the event website, it will showcase how the imagination of OU students, merged with the tools of AI, can create new forms of visual communication.
On Wednesday, OU Scripps Communication students will also host an open forum titled “Is AI really killing creativity?”
At OU, the conversation doesn’t end at creativity; as Ohio’s top “green school,” according to the Princeton Review, there’s no question that environmental impacts are discussed.
AI is poised to represent 6% of America’s total electricity usage in 2026, and further research has shown AI exacerbates environmental inequalities globally, according to the Harvard Business Review.
Shovlin points out, however, that those concerned about AI’s environmental impact still need to understand which platforms best fit their own practices.
“I read in a study that just came out a couple of weeks ago that said people who are most likely to use AI are also most likely to have lower AI literacy,” Shovlin said. “From my perspective, what that says is, if you have real ethical problems with this technology, the more you can learn about it in a way that feels comfortable to you, the better off you're going to be equipped to have agency in your life.”
Others, like Mickey Zheng, a freshman studying biological sciences, have concerns centered around academic integrity when it comes to AI.
“People are starting to rely on AI way more than they are supposed to, and feel very dependent on it,” Zheng said. “There should be a limit on how we use AI, especially if it’s not for beneficial reasons.”
Despite disagreements ranging from environmental concerns to ethical academic uses, Shovlin believes it’s imperative to put aside biases against or for AI and simply understand it will be a large part of technology for the foreseeable future.
“I think we need to start thinking carefully, especially at institutions like Ohio University,” Shovlin explained. “How can we train up folks in the community around us to develop AI literacy, and I think we have an opportunity here in ways in the past, because this aspect of literacy is so new that we can democratize it in ways that we haven't with literacy before. And I want to be careful to mention in that statement that those differing perspectives of skepticism and being a proponent are both important, and you can't just do one.”