Yale University professor Alexander Gil Fuentes recently issued an unorthodox disclaimer on an assignment for his Introduction to Digital Humanities II course: “This final paper may be the strangest you’ve submitted for a grade in your whole life. You won’t be writing this one alone. You won’t be writing it with another person either — not directly in any case.”
Instead, students would be choosing a topic related to their current area of research and using ChatGPT to do the heavy lifting on the first draft. The students’ task would be to fact-check, correct, and enhance the AI’s suggestions. The completed project would need to include examples documenting the evolution from the initial AI-generated version to the final edited piece.
As technology continues to reshape traditional academic boundaries, Gil Fuentes's initiative reflects an adaptive approach: moving with the tide rather than resisting it. Just two months after ChatGPT was released to the public, nearly one-third of college students admitted using the technology to assist with written homework. Of those, close to 60% said they use it on more than half of their assignments. This trend underscores how important it is for educators not only to integrate generative AI into their teaching methods, but also to familiarize themselves with its capabilities firsthand.
ChatGPT isn’t the first technology to upend classroom dynamics — educators have long had love-hate relationships with tools like Wikipedia and even spell check. But much like calculators in the 1970s or the internet in the 1990s, generative AI “isn’t going back in the box,” says Jennifer Frederick, associate provost for academic initiatives at Yale. “We need to be graduating students who are adequately prepared.”
Still, fewer than 10% of schools and universities have formal AI policies or guidance in place. This leaves instructors navigating a thorny landscape — one that’s as ripe with potential as it is riddled with question marks.