Artificial intelligence has been around for decades. In 1951, computer science pioneer Christopher Strachey wrote the first successful AI program. Since then, it has been woven into our everyday lives — from bank fraud detection to flu season prediction to facial identification on our smartphones.
Yet as large language model-based tools like ChatGPT have become accessible to the public, AI has garnered more and more negative attention, especially because of the prospect that students may use it to complete assignments.
Cynthia Furse, an electrical and computer engineering professor at the University of Utah, uses AI in the classroom. She teaches her students how to use genetic algorithms, which mimic natural selection in code, to design antennas.
Engineers have been using AI techniques like genetic algorithms since the 1960s, so ChatGPT and other text-generating bots don’t strike Furse as particularly revolutionary, at least in their current form. They can’t produce data or cite sources accurately, so their ability to help write, say, a scientific report is still limited.
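The approach Furse teaches can be illustrated with a minimal sketch. This is a generic genetic algorithm, not her actual coursework: it evolves a population of bitstrings toward a toy fitness goal (counting 1-bits stands in for a real objective like antenna gain), using the classic select-crossover-mutate loop.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

POP_SIZE, GENOME_LEN, GENERATIONS = 30, 20, 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective: count of 1-bits (a stand-in for, say, antenna gain).
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover combines two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Occasionally flip a bit, mimicking random mutation.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Random starting population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

# Each generation, breed a new population from the fitter survivors.
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))
```

In a real antenna-design problem, the genome would encode geometry parameters and the fitness function would come from an electromagnetic simulation rather than a bit count.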
“You’ve just about written the thing by the time you tell ChatGPT what to write for you,” Furse said.
Other professors, like Chris Babits at Utah State University, believe generative AI will provoke a “paradigm shift” in college education — not because they fear their students depending on it, but because it presents educators with an opportunity to reassess how they teach.
AI raises the bar for teachers and students
“I think the role that a professor should play is not to be afraid of whether students cheat. What we should do is become more aware of what it means to teach our students in the first place,” said Christa Albrecht-Crane, an English professor at Utah Valley University.
As the chair of the Writing Program Committee at the university, Albrecht-Crane has been leading discussions with her colleagues about how they can teach their students that doing their own writing is to their advantage.
Part of that initiative is breaking papers into a series of smaller assignments: brainstorming, outlining, drafting, revising and peer review. ChatGPT can assist with all these steps — Albrecht-Crane has even introduced it in her classroom as a “collaborator” — but breaking an essay down may make students less likely to rely on it to write the whole thing.
“The emphasis is not the product of writing, but the emphasis is the process of creating writing that students feel like they are invested in,” Albrecht-Crane said.
Text-generating AI may also prompt professors to design assignments that require more creativity, collaboration and complex analysis from students. Babits, a history professor, plans to implement this philosophy in his “America in the 1960s” class this upcoming fall.
Expecting that students will use ChatGPT to complete “lower-level” tasks for the class, Babits is requiring them to create mock museum exhibits for the National Museum of American History and social media campaigns to market those exhibits based on the assigned readings.
“So you can see that is much more difficult and challenging, but probably more meaningful than sitting down and writing three essays on three different books over the course of a semester,” Babits said.
Some professors see ChatGPT as just another new technology — in the same category as calculators and the internet. Writing used to require knowledge of spelling and grammar, until spell checkers and Grammarly came along. USU computer science professor John Edwards argues that text-generating AI is doing the same thing at a higher level: prose.
“But I don’t think it takes away the highest level — the most important part — of how to construct an argument or … the creativity behind writing,” Edwards said.
Incorporating ChatGPT into college curricula
Halfway through this past spring semester, Babits gave his history of sexuality class an optional assignment: the students were to feed ChatGPT a question they had about the class material, then analyze the strengths and weaknesses of the responses.
After their initial amazement wore off, students started noticing flaws in the bot’s output — it was relatively general, lacked depth and even got some information wrong. Some students saw it as a helpful starting point, but no one said they would trust it to do an entire assignment.
One takeaway is that students may be less likely to use ChatGPT to do all their work if they understand its shortcomings. Albrecht-Crane believes educators have a responsibility to teach their students how to use ChatGPT ethically in order to prepare them to enter a workforce that will likely be transformed by AI.
What if students do use ChatGPT to cheat?
Most of these professors admitted to the inevitability of at least some students using generative AI to get out of doing their own work. And while some believe banning ChatGPT will cause more harm than good, others see it as the most viable way to deter students from depending on it.
Edwards believes college students should be exposed to AI at some point, but that banning it at the course level would not be detrimental.
“ChatGPT is not changing the world. It’s one step in our decades-long progression in using technology in innovative ways,” he said. “It’s a big step, to be sure, but I have absolutely no problem with an English teacher not using it.”
Returning to traditional testing rather than giving students open-book, take-home exams may be in the cards for educators who worry about their students losing their critical thinking skills to AI.
One of the main problems with banning generative AI use in the classroom is that it is nearly impossible to enforce. It’s easy enough for students to tweak ChatGPT responses until they sound human-written, and AI detection software often flags a paper as at least partly AI-generated when the student wrote the whole thing themselves.
That’s why Edwards and his colleagues at USU are exploring a different method of plagiarism detection: keystroke tracking. Along with their assignment, computer science students would submit a log of the backspaces, copy-and-paste events, and every other key pressed in the process of typing their code.
Having a window into their students’ coding processes would allow professors to detect red flags — a copy and paste might suggest that a student used a code-generating AI program like Copilot. Edwards’ research also demonstrates that students are less likely to plagiarize when they know their keystrokes are being tracked.
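The analysis side of such a system could look something like the following sketch. The log format and threshold here are hypothetical, not the actual USU tool: it simply scans a session log for single paste events large enough to suggest code was generated elsewhere.

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    kind: str   # "key", "backspace", or "paste"
    chars: int  # characters inserted by this event

def flag_suspicious(events, paste_threshold=80):
    """Return paste events that inserted a large chunk of text at once.

    A big paste is only a red flag, not proof of cheating: the student
    may have pasted boilerplate from the assignment handout.
    """
    return [e for e in events
            if e.kind == "paste" and e.chars >= paste_threshold]

# A hypothetical session log: 200 ordinary keystrokes, one backspace,
# then a single 350-character paste.
log = ([KeyEvent("key", 1) for _ in range(200)]
       + [KeyEvent("backspace", 0), KeyEvent("paste", 350)])

print(len(flag_suspicious(log)))  # the one large paste is flagged
```

A production system would also look at timing patterns, since a real detector needs more signal than paste size alone.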
Although Edwards and his team are still working out some of the issues with keystroke logging, like privacy concerns and the anxiety it could cause some students, it may present a more effective approach to AI detection in education.
What happens when AI advances?
Many argue that the real concern is not AI in its current state, but the more capable AI that the future will likely bring. Earlier this month, the Center for AI Safety released a statement emphasizing the gravity of AI’s advancement, receiving signatures from hundreds of tech experts.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement says.
Other experts have chosen to take a step back from AI. Geoffrey Hinton, the “Godfather of AI” whose research was essential to the development of software like ChatGPT, has quit working for Google and expressed concern that AI might grow out of control.
He even said AI’s potential to wipe out humanity is “not inconceivable,” the Deseret News reported.
This view, however, is not universal. As an engineer, Furse sees hope in the future of generative AI. If ChatGPT eventually gains the ability to produce more accurate information and cite sources, for example, that would be useful and would allow students and teachers to focus on more advanced tasks.
“Saving us time to do the more inventive stuff would be really fantastic,” Furse said.
Edwards contends that if AI does become destructive, it will not be because it became too intelligent, but because humans will have failed to adapt.
“We need to elevate what makes us human beings,” he said. “And if we do that, if we’re continually improving as human beings, then I don’t think computers will ever catch up.”