BARCELONA — Alarmed by the growing risks posed by generative artificial intelligence (AI) platforms like ChatGPT, regulators and law enforcement agencies in Europe are looking for ways to slow humanity’s headlong rush into the digital future.
With few guardrails in place, ChatGPT, which responds to user queries in the form of essays, poems, spreadsheets and computer code, has recorded over 1.6 billion visits since December. Europol, the European Union Agency for Law Enforcement Cooperation, warned at the end of March that ChatGPT, just one of thousands of AI platforms currently in use, can assist criminals with phishing, malware creation and even terrorist acts.
“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” the Europol report stated. “As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime and child sexual abuse.”
Last month, Italy slapped a temporary ban on ChatGPT after a glitch exposed user files. Italy’s data protection authority, the Garante, threatened the program’s creator, OpenAI, with millions of dollars in fines for privacy violations until it addresses questions of where users’ information goes and establishes age restrictions on the platform. Spain, France and Germany are looking into complaints of personal data violations — and this month the EU’s European Data Protection Board formed a task force to coordinate regulation across the 27-country bloc.
“It’s a wake-up call in Europe,” EU legislator Dragos Tudorache, co-sponsor of the Artificial Intelligence Act, which is being finalized in the European Parliament and would establish a central AI authority, told Yahoo News. “We have to discern very clearly what is going on and how to frame the rules.”
Even though artificial intelligence has been a part of everyday life for several years — Amazon’s Alexa and online chess games are just two of many examples — nothing has brought home the potential of AI like ChatGPT, an interactive “large language model” where users can have questions answered, or tasks completed, in seconds.
“ChatGPT has knowledge that even very few humans have,” said Mark Bünger, co-founder of Futurity Systems, a Barcelona-based consulting agency focused on science-based innovation. “Among the things it knows better than most humans is how to program a computer. So, it will probably be very good and very quick to program the next, better version of itself. And that version will be even better and program something no humans even understand.”
The startlingly efficient technology also opens the door for all kinds of fraud, experts say, including identity theft and plagiarism in schools.
“For educators, the possibility that submitted coursework might have been assisted by, or even entirely written by, a generative AI system like OpenAI’s ChatGPT or Google’s Bard, is a cause for concern,” Nick Taylor, deputy director of the Edinburgh Centre for Robotics, told Yahoo News.
OpenAI and Microsoft, which has financially backed OpenAI but has developed a rival chatbot, did not respond to a request for comment for this article.
“AI has been around for decades, but it’s booming now because it’s available for everyone to use,” said Cecilia Tham, CEO of Futurity Systems. Since ChatGPT was introduced to the public as a free trial on Nov. 30, 2022, Tham said, programmers have been adapting it to develop thousands of new chatbots, from PlantGPT, which helps to monitor houseplants, to the hypothetical ChaosGPT “that is designed to generate chaotic or unpredictable outputs,” according to its website, and ultimately “destroy humanity.”
Another variation, AutoGPT, short for Autonomous GPT, can perform more complicated goal-oriented tasks. “For instance,” said Tham, “you can say ‘I want to make 1,000 euros a day. How can I do that?’ — and it will figure out all the intermediary steps to that goal. But what if someone says ‘I want to kill 1,000 people. Give me every step to do that’?” Even though the ChatGPT model has restrictions on the information it can give, she notes that “people have been able to hack around those.”
The potential hazards of chatbots, and AI in general, prompted the Future of Life Institute, a think tank focused on technology, to publish an open letter last month calling for a temporary halt to AI development. Signed by Elon Musk and Apple co-founder Steve Wozniak, it noted that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” and “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
The signatories called for a six-month pause on the development of AI systems more powerful than GPT-4 so that regulations could be hammered out, and they asked governments to “institute a moratorium” if the key players in the industry did not voluntarily do so.
EU parliamentarian Brando Benifei, co-sponsor of the AI Act, scoffed at that idea. “A moratorium is not realistic,” he told Yahoo News. “What we should do is to continue working on finding the correct rules for the development of AI,” he said. “We also need a global debate on how to address the challenges of this very powerful AI.”
This week, EU legislators working on AI published a “call to action” requesting that President Biden and European Commission President Ursula von der Leyen “convene a high-level global summit” to nail down “a preliminary set of governing principles for the development, control and deployment” of AI.
Tudorache told Yahoo News that the AI Act, which is expected to be enacted next year, “brings new powers to regulators to deal with AI applications” and gives EU regulators the authority to hand out hefty fines. The legislation also includes a risk-ordering of various AI activities and prohibits uses such as “social scoring,” a dystopian monitoring scheme that would rate virtually every social interaction on a merit scale.
“Consumers should know what data ChatGPT is using and storing and what it is being used for,” Sébastien Pant, deputy head of communications at the European Consumer Organisation (BEUC), told Yahoo News. “It isn’t clear to us yet what data is being used, or whether data collection respects data protection law.”
The U.S., meanwhile, continues to lag in taking concrete steps to regulate AI, despite concerns recently raised by FTC Commissioner Alvaro Bedoya that “AI is being used right now to decide who to hire, who to fire, who gets a loan, who stays in the hospital and who gets sent home.”
When Biden was recently asked whether AI could be dangerous, he replied, “It remains to be seen — could be.”
The differing attitudes about protecting consumers’ personal data go back decades, Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, a think tank focused on data protection, told Yahoo News.
“The EU has placed great importance on how the rights of people are affected by automating their personal data in this new computerized, digital age, to the point in which it included a provision in its Charter of Fundamental Rights,” Zanfir-Fortuna said. European countries such as Germany, Sweden and France adopted data protection laws 50 years ago, she added. “U.S. lawmakers seem to have been less concerned with this issue in previous decades, as the country still lacks a general data protection law at the federal level.”
In the meantime, Gerd Leonhard, author of “Technology vs. Humanity,” and others worry about what will happen when ChatGPT and more advanced forms of AI are used by the military, banking institutions and those working on environmental problems.
“The ongoing joke in the AI community,” said Leonhard, “is that if you ask AI to fix climate change, it would kill all humans. It’s inconvenient for us, but it is the most logical answer.”