Florida universities want to crack down on AI use, but how can they detect it?

The public launch of OpenAI’s artificial intelligence chatbot ChatGPT in November of last year sent ripples not only through the tech world but through virtually every aspect of everyday life. At least, that’s what its most ardent supporters would have you believe.

In truth, as is often the case, reality falls short of the hype. AI chatbots such as Google’s Bard and Microsoft’s Bing AI chatbot are great at helping users brainstorm, learn to code or pick up a new language, and they can be genuinely useful for answering basic questions. But it’s often better to think of AI-powered chatbots as unreliable narrators; every answer should be taken with a grain of salt and fact-checked rather than accepted strictly at face value.

These are all points University of Florida Provost Joe Glover addressed in an update he gave the University of Florida Board of Trustees on the school’s AI initiatives.

“As everyone knows, generative AI hallucinates. ChatGPT makes mistakes, it doesn’t give the right answers. It is subject to flights of fancy and depression,” Glover said.

OpenAI, the artificial intelligence research company that developed ChatGPT, describes its text models as “advanced language processing tools that can generate, classify, and summarize text with high levels of coherence and accuracy.”

But while ChatGPT is a powerful tool, it’s just that — a tool. It requires a human touch to harness its capabilities and to provide a safety net when it gets things wrong.

As AI’s capabilities creep toward their near-frightening potential, Florida universities like UF continue working to infuse those abilities into everything they do. One example is UF’s supercomputer, HiPerGator, which uses AI to tackle agricultural and environmental problems.

But universities are also keeping an eye on how students are using AI tools and how they could be using them to essentially plagiarize writing assignments.

What is generative AI?

Plug this question into ChatGPT and it spits out an extensive, four-paragraph answer that most people won’t be interested in reading, which points to another issue that plagues AI chatbots: users need to be smart about their prompt “engineering.”

In simpler terms, generative AI refers to systems, including large language models (LLMs), that use the data they were trained on to generate new text, images, audio and video based on a prompt provided by a user.
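To see what that looks like in practice, here is a short, hypothetical Python sketch (an illustration, not code used by UF or any other university) that asks the small, open-source GPT-2 model to continue a prompt. It relies on the same basic mechanism, predicting likely next words, that larger chatbots such as ChatGPT build on.

```python
# A minimal sketch of prompt-to-text generation with the open-source GPT-2 model.
# Illustrative only; ChatGPT itself is a far larger, instruction-tuned system.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

# The model predicts plausible next words; nothing here checks whether they are true.
print(result[0]["generated_text"])
```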

The crux here is that the data AI is trained on must be accurate, and data can change over time as new details are learned and new ideas are created.

The other issue is how these systems produce their results. One widely used architecture, the generative adversarial network, relies on two “competing” models: a generator that produces synthetic data and a discriminator that tries to tell it apart from the real data the system was trained on. The response the user sees is whatever the generator manages to get past the discriminator. Chatbots like ChatGPT take a different approach, predicting the most statistically likely next word in a response, which means they can produce fluent, confident text with no guarantee that any of it is true.
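To picture that adversarial tug-of-war, here is a minimal, hypothetical Python sketch using the PyTorch library: a tiny generator learns to imitate a simple stream of “real” numbers while a discriminator tries to catch the fakes. It illustrates only the generator-versus-discriminator idea described above; it is not how ChatGPT or any university system is built.

```python
# A toy generative adversarial network: the generator tries to fool the discriminator.
# Illustrative sketch only, using simple 1-D numbers instead of text or images.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" training data: numbers drawn from a distribution centered near 4.0.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to produce samples the discriminator mistakes for real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated numbers should drift toward the real data's average of ~4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```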

Those fluent-but-unchecked responses are where the more egregious errors have popped up, such as a New York lawyer who cited legal cases that didn’t exist after using ChatGPT for legal research.

How is AI being used in schools and universities?

The use of large language models and machine learning is prevalent throughout our lives whether or not we recognize it. We use machine learning when we snap a photo of our latte to post on social media before we begin our day.

Both Google and Apple use machine learning in their cameras, not only to adjust the camera’s settings but also to apply processing that turns the raw data from images into a more palatable photo. In its Pixel phones, Google has even gone so far as to include a custom chip that accelerates this work to make snapping a photo, well, snappier.

Similarly, universities use AI to help streamline everything from how they use data and learning analytics to how they approach early childhood education and manage their facilities.

Some universities, for instance, use AI tools to assess students’ skill levels and create tailored instruction that helps them become more proficient.

AI tools also help with academic research. UF’s HiPerGator supercomputer — a gift from NVIDIA, which designs graphics processing units like those used to train AI systems — is used by researchers at UF and other universities to study agriculture and the environment.

How are students using AI in schools and universities?

College-ranking website BestColleges in March found that 43% of college students have experience using AI and 22% said they’ve used it to complete exams or assignments.

Most students seem to use AI for reasons outside of their schoolwork, according to BestColleges, which found that 57% of students said they do not intend to use or continue using AI to complete assignments or exams.

And of the students who have used AI tools for their schoolwork, half said they completed the majority of the work themselves, potentially reinforcing Glover’s point that AI tools still require someone with working knowledge of a subject to use them properly.

The Florida Gulf Coast University (FGCU) Board of Trustees met in June to discuss AI and its “threats and opportunities to teaching and learning.” The board acknowledged that the school hasn’t seen an overall increase in student misconduct or academic integrity cases since the spring semester, but said it has anecdotal evidence of students using ChatGPT on English Composition assignments.

Can plagiarism detectors detect AI?

Social media is saturated with anecdotal stories of students receiving zeros on assignments after professors and teachers accused them of turning in work written by AI.

Universities have used plagiarism-detection software for years, and some of those programs claim they can detect AI, too — like TurnItIn.com, which FGCU now uses to help screen students’ work for AI use.

The important question is whether any of these AI detectors are accurate. The answer, unfortunately, is about as gray as the plethora of ethical questions surrounding AI itself.

OpenAI’s AI Text Classifier is a tool directly from ChatGPT’s creators that can be used to flag AI-generated content, provided the text is at least 1,000 characters long. And even then, the tool isn’t perfect, as AI-generated text edited by humans can slip past undetected.

Other popular tools have since popped up, like GPTZero, an AI-detection tool created by Edward Tian, a Princeton University undergraduate and former software engineering intern at Microsoft.

The tool analyzes certain characteristics of a text, such as how predictable it is to a language model (known as its perplexity) and how much its sentences vary in length and structure (known as burstiness), and compares those measurements to how AI-generated text typically looks.
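To make “perplexity” a little more concrete, here is a rough Python sketch of the general idea, not GPTZero’s actual method, that scores a passage with the open-source GPT-2 model. Text the model finds very easy to predict, meaning a low perplexity score, is the kind detectors tend to flag as machine-written.

```python
# A rough sketch of measuring a text's perplexity with GPT-2.
# Illustrative only; this is not GPTZero's scoring code.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Encode the text and ask the model to predict each token from the ones before it.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels returns the average next-token loss.
        outputs = model(**inputs, labels=inputs["input_ids"])
    # Perplexity is the exponential of that average loss; lower means more predictable.
    return torch.exp(outputs.loss).item()

sample = "Generative AI is a type of artificial intelligence that can create new content."
print(f"Perplexity: {perplexity(sample):.1f}")
```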

GPTZero appears to be slightly better at detecting AI-generated text, but it’s still not perfect and there is no empirical data indicating just how accurate the tool is.

How will Florida universities combat AI?

Most Florida universities have taken a precautionary stance on AI use by students. FGCU's Dean of Students office will "begin tagging academic misconduct cases for instances of Generative AI usage, and an explicit syllabus statement will be generated for faculty use to prevent cases of unauthorized assistance," according to the June 13 agenda item.

How exactly universities move forward to combat students' AI use is still to be determined.

This article originally appeared on Pensacola News Journal: AI use prevalent at Florida colleges now learning how to deal with it