Opinion: AI used for cheating is inevitable, but schools can meet the challenge

We recently gave ChatGPT the Test of Understanding of College Economics, a standardized test that has been in use in the field of economics for the past 50 years. The results floored us. Compared with economics students, ChatGPT scored in the 91st percentile in microeconomics and the 99th percentile in macroeconomics, outperforming almost all of them.

It is also an incredible cheating machine. While anti-plagiarism tools can compare a student’s work with existing sources, ChatGPT can generate original content in seconds, making plagiarism almost impossible to detect. Furthermore, ChatGPT has several advantages over non-AI forms of cheating. It is free, simple to use and generates content much more quickly than earlier methods.

The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, Tuesday, March 21 in Boston. (Photo: Michael Dwyer/Associated Press/File)

This presents two big new problems. How should we assess our students, given the boost that AI gives to potential cheaters? And what should we teach, given that AI can “learn” certain kinds of information better and far faster than our students can?

We could pretend that our students won’t take advantage of ChatGPT to cheat, and many of them won’t. We would be poor economists, though, if we didn’t reckon with how the new AI software changes the whole incentive structure. For many of the traditional kinds of assignments we might give, cheating is now the lowest-cost way to earn a high grade. And not cheating, in some circumstances, puts students at a disadvantage, as they are likely to do worse than their counterparts who rely on AI. In the jargon of economics, the dominant short-run strategy is to cheat.

Economists describe this type of game as a prisoner’s dilemma. Each student can secure the best possible course grade by cheating, no matter what other students do. This comes at the cost of learning, however, which is best achieved by studying for the test or writing the paper, and when everyone cheats, everyone ends up learning less.

One way to reduce the amount of cheating is to give in-person, proctored exams. It’s not impossible to cheat even in this context, but the goal would be to make the challenge of cheating great enough that most students will rationally decide that their time is better spent studying instead of finding ways to cheat.

We can design new types of exams and assessments that begin from where ChatGPT leaves off. We should require students to critique and evaluate written material rather than simply replicate and rehash. Assessments that evaluate higher-level thinking skills like analysis, evaluation and creation can help engage students in meaningful learning experiences while making it more difficult for ChatGPT to circumvent the process. Experiential learning, including more internships and more empirical research opportunities, can also play an expanded role in economics education.

We should also learn to embrace ChatGPT and other AI systems as continuations, rather than disruptions, of a long process through which technology has supported teaching and learning. We already use an online course management system to communicate. We give online homework that is automatically graded, play Kahoot! games in class to discern in real time how well students understand what is being taught, write on a document camera or tablet instead of a chalkboard, and access online databases with thousands of educational clips to effectively reinforce key learning points. Teaching and learning economics are easier than they have ever been because of technology.

Education overall must embrace this technological disruptor to better prepare students for the jobs of the future. We don’t know yet what this should look like with these new systems, much less with their successors, which will be capable of replicating a broader range of human-like traits. Our job as educators, however, is to stay ahead of the curve, or at least not too far behind it.

G. Dirk Mateer is a professor of instruction at the University of Texas. 

Wayne Geerling is a professor of instruction at the university. 
