How educators are learning to use AI in the college classroom

Together, teachers and students are getting first-hand experience with the potential and pitfalls of generative AI.

By Stephanie Walden — September 26, 2023

Yale University professor Alexander Gil Fuentes recently issued an unorthodox disclaimer on an assignment for his Introduction to Digital Humanities II course: “This final paper may be the strangest you’ve submitted for a grade in your whole life. You won’t be writing this one alone. You won’t be writing it with another person either — not directly in any case.”

Instead, students would be choosing a topic related to their current area of research and using ChatGPT to do the heavy lifting on the first draft. The students’ task would be to fact-check, correct, and enhance the AI’s suggestions. The completed project would need to include examples documenting the evolution from the initial AI-generated version to the final edited piece.

As technology continues to reshape traditional academic boundaries, Fuentes’s approach embraces the change rather than resisting it. Just two months after ChatGPT was released to the public, nearly one-third of college students admitted to using the technology for written homework, and close to 60% said they used it on more than half of their assignments. This trend underscores how important it is for educators not only to integrate generative AI into their teaching methods, but also to familiarize themselves with its capabilities firsthand.


ChatGPT isn’t the first technology to upend classroom dynamics — educators have long had love-hate relationships with tools like Wikipedia or even spell check. But much like calculators in the 1970s or the internet in the 1990s, generative AI “isn’t going back in the box,” says Jennifer Frederick, associate provost for academic initiatives at Yale. “We need to be graduating students who are adequately prepared.”

Still, fewer than 10% of schools and universities have formal AI policies or guidance in place. This leaves instructors navigating a thorny landscape — one that’s as ripe with potential as it is riddled with question marks. 

Illustrations by Derek Abella


Upending the status quo in academia

Fuentes is just one of many professors at Yale and at higher-ed institutions across the country incorporating generative AI into the classroom, in courses ranging from creative writing and philosophy to computer science and even nursing. Beyond obvious applications like essay writing, professors are using generative AI to help students with creative problem-solving, translation, coding, and research. Some features of generative AI platforms, such as advanced speech-to-text capabilities, might even help neurodivergent learners or students with disabilities thrive.

Among educators, generative AI is evoking a mix of enthusiasm and trepidation. “A lot of the early headlines about AI in education were very alarmist: ‘The essay is dead, and students will never write anything again,’” says Frederick. “But we need a more holistic approach to thinking about AI than just through the lens of plagiarism.” 

Stephanie Atkins, teaching and learning center manager at Northeast Wisconsin Technical College (NWTC), says the institution has taken a stance of cautious optimism. “Our approach has been, ‘Let’s get in and play. Let’s see how this technological advancement can help our students be successful,’” she says. 

One NWTC faculty member recently deployed AI in a legal issues class to help students brainstorm topics for their weekly papers. Others have used it for analysis and critical thinking exercises, framing the technology’s tendency to “hallucinate” (i.e., make things up) as a teachable moment by encouraging students to question and fact-check AI outputs. “In this capacity, students are moving from a rote knowledge base to an analysis and creation space, and that’s really where deep learning happens,” says Atkins.

A new teaching assistant

Educators are also using the technology to streamline the administrative parts of their own jobs. Karl Reischl, director of technology and information security at NWTC, says he’s planning to use generative AI to come up with quizzes for an upcoming class he’s teaching on virtualization. “I can feed it all of my PowerPoints and other material I’m using in class and have it generate new quiz questions way faster than I could on my own,” he says. 

Additional possibilities include creating lesson plans, writing emails, and even role-playing through different educational scenarios. Atkins says this type of assistance can be a huge help for instructors unfamiliar with the ins and outs of academia. 

“We have faculty that come to us from industry, and do not always have a teaching background,” she says. “Generative AI helps them figure out where to start.”


Educating the educators

For educators looking to get their feet wet with AI, Reischl suggests hands-on experience. “The first step is just to use it, try it out, and learn different ways to ask it questions,” he says. For those seeking more in-depth reference material, there’s a wealth of resources available in the form of YouTube videos, articles, and online courses.

Tony Kashani, a professor of education at Antioch University who is currently writing two books on AI, suggests educators get more meta about it, asking ChatGPT — or other platforms like Google’s Bard — for advice on how to interact with them efficiently. Industry conferences can also be an incredibly useful resource.

Many universities are also developing their own content to help faculty navigate these murky waters. Yale’s Poorvu Center for Teaching and Learning, for example, leads workshops, faculty orientation discussions, panels and forums, and one-on-one consultations with instructors interested in dabbling in the technology for their courses and also for internal research and operations. NWTC produces webinars and compiles a newsletter full of AI resources. Atkins and her team also host formal professional development sessions that get educators experimenting with the tech.

With great power…

That said, many aspects of the technology still deserve a healthy dose of skepticism. One core issue is that AI is only as good as the data it’s trained on — and a lot of training data is biased, inaccurate, or fails to adequately account for cultural and linguistic nuances. Because of this, Yale’s Frederick cautions against ascribing too much authority to AI outputs without independent verification. “They can be a good starting point for conversations about things like bias and representation and what’s missing and how to make that better,” she says.

There are also a number of ethical quagmires associated with the tech. How does the digital divide come into play, especially for students who don’t have access to generative AI in their K-12 years? Are researchers — or artists — whose work is used to train generative AI systems owed compensation? And how can educational institutions fight against mis- and disinformation perpetuated by these platforms?

…comes great responsibility

Kashani cautions it’s important to be upfront with students about the risks inherent in this powerful new technology, and equip them with the tools to use AI as a force for good. “If somebody is taking woodshop and you hand them an electric saw, you have to teach them the safety mechanisms first,” he says. “If we don’t teach kids how to use AI tools, they’re going to learn fast, and they’ll learn the wrong things.” 

Kashani notes that, for AI to have a net positive impact on higher education, that guidance needs to include helping students understand not only what AI can do, but what it can’t.  

“It can be an editorial assistant, or a tutor that’s available to you 24/7,” he says. “But it cannot replace you, as the person who is thinking, imagining, and has the ability to process information with a humanistic point of view.”
