Artificial intelligence goes to school

Person works on code on a computer screen. Emilija Manevska/Getty Images.

AI is transforming education from grade school to grad school and making take-home essays obsolete. Here's everything you need to know:

How is AI changing schooling?

It's raising questions about whether age-old methods of educating people can or should survive in a world where sophisticated answers to virtually any question are just a few keystrokes away. The most popular AI tool, ChatGPT, can generate impressive essays on any subject in seconds. Stephen Chaudoin, a professor of government at Harvard, said ChatGPT produces "B-plus, B-minus work" — and AI is evolving rapidly. GPT-4, the latest version of the chatbot's underlying model, can pass the bar exam, score in the 99th percentile on the verbal section of the GRE, and earn top scores on the Advanced Placement statistics and biology exams. As a result, some educators say the take-home essay is "dead." School districts in Los Angeles and Seattle have blocked ChatGPT from their Wi-Fi networks, and some universities warn students that using AI amounts to plagiarism. Teachers from kindergarten through graduate school are divided: Some say AI is the way of the future and contend that educators must adapt to the new reality, while others speak of it in apocalyptic terms. "It's just about crushed me," an English teacher in Florida said. "With ChatGPT, everything feels pointless."

Has it made it easy to cheat?

ChatGPT's release led to a rash of cheating scandals, including at a high school for gifted students in Cape Coral, Florida. A Santa Clara University student was caught using the chatbot to write an essay for an ethics course. "The irony is very clearly there," said the student's professor, Brian Green, noting that the essay had "a robotic feel." But even though AI-generated writing can be dry and formulaic, it's hard to prove that a given essay was generated by a machine rather than drafted by a human. The chatbots essentially draw on everything on the internet, and their algorithms — whose workings are mysterious even to AI's creators — churn out a somewhat different response each time they're given the same prompt. A March survey of 1,000 undergraduate and graduate students found that half of them admitted to using AI on assignments or take-home exams, and 17% said they had turned in assignments that were researched and written entirely by AI. Only half of respondents said they consider it cheating to use AI to finish coursework and exams.
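That run-to-run variability is easy to see in practice. The sketch below is a minimal illustration, assuming the official openai Python package and an API key set in the OPENAI_API_KEY environment variable (details not drawn from this article), of how the same prompt comes back worded differently on each call when the model samples with a nonzero temperature.

```python
# Minimal sketch: the same prompt, sent three times, typically yields three
# differently worded answers, because the model samples from a probability
# distribution over words rather than returning one fixed response.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a two-sentence summary of the causes of World War I."

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4",            # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,          # nonzero temperature enables sampling
    )
    print(f"--- attempt {attempt + 1} ---")
    print(response.choices[0].message.content)
```

Because every answer is a fresh sample rather than a lookup, two students responding to the same assignment will not hand in identical text, which is part of what makes the cheating so hard to catch.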

Is writing instruction doomed?

AI's writing still cannot match the most creative, original and stylish writing by humans, but many educators believe it will become a standard tool anyway. "The time when a person had to be a good writer to produce good writing ended in late 2022, and we need to adapt," said John Villasenor, a professor at UCLA. Antony Aumann, a professor of philosophy at Northern Michigan University, recalled reading "the best paper in the class" on the morality of burqa bans before growing suspicious about the essay's excellent examples, grammar and arguments. The student admitted to using ChatGPT. Aumann now plans to require students to write first drafts on classroom computers that block chatbots, then explain revisions in subsequent drafts.

What are other concerns?

AI frequently "hallucinates" and generates factually incorrect answers in a detailed, persuasive way — making up events, books and people that don't exist. When pressed for the source of an assertion, AI sometimes cops to making things up. Fears about cheating extend far beyond English class: AI is also capable of writing code, solving math problems, and completing science homework. Sam Altman, CEO of OpenAI, the San Francisco-based company behind ChatGPT, likens the technology to the calculator — an innovation that required changes to how math is taught, but by no means rendered math instruction unnecessary. "This is a more extreme version of that, no doubt," he said, "but also the benefits of it are more extreme, as well."

How can it help students?

Zachary Clifton, a high schooler in Kentucky, uses the chatbot to generate study guides to help him understand and remember his work. Some students use AI to clean up grammar mistakes. Others debate with ChatGPT before writing an essay in order to hone their arguments. AI can offer personalized instruction and shows great promise for students with special needs; for example, AI can convert textbook material into bullet points, charts and images to help students with dyslexia or attention deficit disorder. There's also optimism that AI can be a powerful, affordable tutoring tool. A May survey of 3,000 high school and college students found that 90% prefer studying with ChatGPT over a human tutor, and 95% said their grades improved after studying with ChatGPT.

What about teachers?

Overworked teachers can use AI to create lesson plans, grade assignments and generate multiple-choice questions. AI can offer personalized assistance to students as they work to complete assignments. Jaclyn Major, a sixth-grade teacher at Khan Lab School in Palo Alto, California, uses ChatGPT to help teach math, even though it occasionally makes obvious mistakes. "Remember, we are testing it," she tells her students. "We're learning — and it's learning."

Detecting AI-generated work

Millions of teachers have signed up for software that claims to be able to identify writing produced by AI. The makers of ChatGPT created such a service, which rated any submitted text as "very unlikely, unlikely, unclear if it is, possibly, or likely" AI-generated. The longer the text, the easier ChatGPT's creators say it is to tell the difference. Turnitin, one of the most popular plagiarism-detection services, claims to be able to spot AI's handiwork with 98% certainty. But Turnitin and its competitors are notorious for producing false accusations of cheating. Turnitin says one hallmark of AI-generated text is that the writing is "extremely consistently average." Of course, some real students produce consistently average work. The big obstacle to detecting cheaters is that each chatbot-generated essay or answer has variations that make it unique; some students mix AI-generated work with their own, making it even harder to discern. Ian Bogost, a professor at Washington University in St. Louis, investigated the effectiveness of AI-detecting software for The Atlantic and concluded that "identifying cheaters — let alone holding them to account — is more or less impossible."
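For a sense of what "extremely consistently average" might mean as a measurable signal, here is a toy sketch in Python. It is not Turnitin's method, which the company does not disclose; it simply measures how uniform a passage's sentence lengths are, one naive statistic sometimes cited in discussions of AI detection, and it shows why such signals misfire on real students who happen to write evenly.

```python
# Toy illustration only: a crude "uniformity" score for a piece of writing.
# Low variation in sentence length is one naive signal people associate with
# machine-generated prose, but plenty of human writers score low too, which
# is exactly why detectors produce false accusations.
import re
import statistics

def sentence_length_spread(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform_essay = (
    "The essay states a clear thesis. It supports each claim with evidence. "
    "It addresses one obvious counterargument. It ends by restating the thesis."
)
varied_essay = (
    "Burqa bans are contentious. Supporters invoke security and secularism, "
    "while critics see a policy that singles out a visible minority and tells "
    "women what they may wear. Who is right? It depends on what a state owes "
    "its citizens."
)

print(f"uniform essay spread: {sentence_length_spread(uniform_essay):.2f}")
print(f"varied essay spread:  {sentence_length_spread(varied_essay):.2f}")
```

By this crude measure, a student who naturally writes in short, even sentences looks just as "machine-like" as a chatbot, which is the kind of false positive the detection services are criticized for.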
