Chris Schillig: A real person wrote (most of) this column

A retired colleague texted the other day to ask if any of my students had used artificial intelligence to write their essays.

Honestly, I don’t know, but it’s certainly possible.

Artificially generated student work is a growing concern among educators, at least based on the number of emails I’ve received, discussions I’ve heard, and articles I’ve read in the past month alone.

The latest focus of the let’s-stay-one-step-ahead-of-student-dishonesty debate is ChatGPT, a program from OpenAI that “generates human-like responses in a conversational context,” a line that ChatGPT itself generated when I asked it to write an essay about the topic.

Like many a penny-ante dictator, the chatbot referred to itself in the third person as it explained that “as it interacts with users, it can improve its understanding of language and become more adept at generating appropriate responses.”

Chatbots themselves are nothing new. Chances are good that you’ve communicated with them through the online customer service departments of large companies like AT&T or Starbucks. Or whenever you’ve asked Siri what song is playing on the radio.

Education has a long history of bucking new trends in technology on the grounds that they are bad for learning. I was a student during the calculator wars of the late ’70s and early ’80s, when administrators and teachers couldn’t decide whether the device was savior or antichrist.

Before that, teachers debated erasable pens, and before even that, fountain pens. In the dim past, the profession no doubt criticized the written word itself because it short-circuited memorization.

Still, ChatGPT and programs like it are enough to give even the most progressive educator pause. As a test, I asked it to write an essay about student mental health, using scholarly sources. In less than a minute, it spit out a 466-word response that referenced the World Health Organization and the American College Health Association.

Sources were both paraphrased and quoted directly. Full citations, in pristine American Psychological Association format, were included at the end of the piece.

More importantly, however, the writing itself was sharp and clean, integrating outside evidence with the kind of fluency I work hard to instill in students throughout the semester.

So, am I worried about ChatGPT and other artificial intelligence programs in the classroom?

Of course, but not overly so.

For one thing, the new technology speaks to the importance of something most language arts teachers I know are doing already: flipping the classroom. This means students write while they are at school, under the guidance of teachers who help them formulate claims and integrate evidence (and who monitor that the work is the students’ own).

Additionally, the issue emphasizes the need for more student choice in selecting topics. Students who are intrinsically invested in a topic are more likely to complete their own research and do their own writing than students who are required to write about subjects that hold little interest for them.

The current kerfuffle also speaks to the necessity of ongoing conversations with students about what AI can do and what it cannot. When I made a second attempt to have ChatGPT generate a response (write a 500-word essay on the lack of effectiveness of current animal welfare laws in Ohio), the results were less promising. Sentences were awkward and repetitive, and the argument was presented in the stilted five-paragraph-essay style (albeit in six paragraphs).
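For readers who want to tinker, the same experiment can be run in code rather than through the chat window. What follows is a minimal sketch, assuming OpenAI’s official Python library and an API key set in the environment; the model name is a stand-in for whichever version of the chatbot you can access, not necessarily the one I used.

```python
# Rough sketch of sending my second essay prompt to an OpenAI chat model.
# Assumes the `openai` Python package (v1 or later) and an OPENAI_API_KEY
# environment variable; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder for whatever model is available
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 500-word essay on the lack of effectiveness "
                "of current animal welfare laws in Ohio."
            ),
        }
    ],
)

# Print the generated essay.
print(response.choices[0].message.content)
```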

Still, it probably would earn a passing grade in most classrooms.

The Financial Times notes that student access to ChatGPT means colleges and universities — I would add junior high and high schools — need to become more creative in their assessments. Not every course or topic needs a formal essay for a final project.

What won’t work is doubling down on originality-detection software like Turnitin or trying to ban programs like ChatGPT. The former merely enriches the coffers of the same people who develop the transgressive programs to begin with; the latter is like trying to close the barn door after the horse is already out.

That last sentence ends with a cliche, maybe the best proof that some of us are still doing our writing the old-fashioned way.

To see the full chatbot responses mentioned above, visit https://tinyurl.com/mpurx9mb.

Reach Chris at chris.schillig@yahoo.com. On Twitter: @cschillig.
