Floridians see promise — and potential perils — in artificial intelligence

In the 13 months since ChatGPT launched, artificial intelligence has generated an avalanche of interest, with reactions ranging from dire warnings to exuberant optimism, sparking conversations and uncertainty from dining room tables to corporate boardrooms to the corridors of government.

Everyone, it seems, has a theory about what AI can do, should do, might do or shouldn’t do.

AI is creeping into the political world, with more to come.

Never Back Down, the super political action committee supporting Gov. Ron DeSantis’ presidential campaign, used AI to generate a voice that sounded like former President Donald Trump, the Republican frontrunner, and featured it in an ad.

Never Back Down had reportedly earlier used AI to superimpose video of fighter jets to make it appear as if they’d flown overhead during a DeSantis speech.

Separately, Kevin Aslett, who studies AI at the University of Central Florida, said it could be used to create and rapidly spread disinformation during the 2024 election season.

Floridians divided

Floridians are split over the potential promise, and pitfalls, of AI, and many are concerned about the risks of the technology’s rapid development.

A University of South Florida/Florida Atlantic University poll of Florida adults this year found that 46% believed AI would improve American society, while an equal 46% disagreed.

Plenty of people who believe it would improve society nevertheless voiced major concerns:

  • 75% said they were “worried that AI could pose a risk to human safety.”

  • 54% said they were “worried that AI could threaten my employment in the future.”

  • 55% said AI “is being developed too quickly in the United States.” Far fewer, 28%, said it was being developed at an appropriate pace. And just 4% said it was being developed too slowly.

  • 70% would support “a temporary pause on AI development.”

“People are really divided on the issue,” said Stephen Neely, an associate professor at USF’s School of Public Affairs. “There’s a lot of hope about the promise of AI, but there’s a lot of concerns about whether or not the oversight is there.”

Widespread interest

Companies have been developing AI for years. But it burst into widespread public consciousness when one company, OpenAI, released ChatGPT in November 2022. Google followed with the release of its Bard chatbot.

“There’s a lot of engagement around this. People understand this is here, this is now, this is going to change our world. And they’re really engaged in trying to understand it better,” Neely said. “I can’t tell you how many conversations I’ve had in the last couple of months. People [say they] ‘need to get up to speed on generative AI. I’m really falling behind.’ People have an understanding that this is the future.”

Generative AI can take raw data — “say, all of Wikipedia or the collected works of Rembrandt — and ‘learn’ to generate statistically probable outputs when prompted,” according to an online explanation from IBM. A Florida International University article explained that AI systems allow computers to “produce human sounding written language and convert descriptive phrases into realistic images.”
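
To make “statistically probable outputs” concrete, here is a minimal sketch in Python of the same idea at toy scale: a bigram model that “learns” which word tends to follow which from raw text, then extends a prompt with probable continuations. The corpus and names below are illustrative assumptions, not anything from the article or IBM; systems like ChatGPT apply the same statistical idea at vastly larger scale, predicting tokens rather than whole words.

```python
# Minimal sketch of "statistically probable outputs": a toy bigram model.
# The corpus and all names here are illustrative, not from the article.
import random
from collections import Counter, defaultdict

corpus = (
    "artificial intelligence can help people and artificial intelligence "
    "can also worry people because people cannot always verify its output"
).split()

# "Learn": count how often each word follows each other word.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Sample a statistically probable continuation of the prompt."""
    words = [prompt_word]
    for _ in range(length):
        counts = next_counts.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("artificial"))  # e.g. "artificial intelligence can help people and"
```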

People are paying attention, Neely said. On the poll’s many questions about AI (the USF/FAU survey also covered several other topics, including recreational marijuana and COVID vaccines), there weren’t large shares of people unsure about their views.

In open-ended questions, in which people were asked to share their thoughts on AI, its implications for the healthcare system and its impact on their own health, they provided more thoughtful and in-depth responses than he’s seen in 10 years of interviewing people. The responses were far more extensive than when people are asked open-ended questions about politics, he said.

Legitimate concerns

Florida experts who research and think about AI said concerns are legitimate.

“People’s fears are founded,” said James W. Jacobs, associate director of policy and partnerships at the Florida Center for Cybersecurity, based at USF.

“Anytime a new technology comes along, we can’t really predict with 100% certainty how it’s going to turn out.”

Jacobs added that he’s optimistic about the future of AI. “I’m excited about the application,” he said, though he added a caveat: “I’m not going to sit here and say it’s all going to be roses.”

In separate interviews, Jacobs and Aslett both addressed the Terminator scenario. The world, they said, doesn’t face extinction at the hands of out-of-control AI, a la the Terminator movies.

“The Terminator, or the worry that there’s going to be superhuman knowledge in a computer that’s going to be able to take over the world, it’s not here, and we’re not close to that,” Aslett said.

Aslett, an assistant professor at the University of Central Florida who specializes in political communication and computational social science, works at UCF’s Cybersecurity and Privacy Research Cluster.

Jacobs said, “We’re not on the verge of Terminator. We’re giving AI more credit than what it is capable of. We’re not there yet.”

But, he added, there are potentially ominous uses of AI if it’s in the hands of people who want to do harm.

The scariest, he said, is the possibility of a bio risk — using AI to mutate a virus into something more deadly and contagious. “These risks aren’t there now. But there’s a lot of potential risks. And that’s what I think scares people: the potential of the technology. And also not knowing much about the technology.”

“The risk of a super human computer taking over the world is not here in the short term. The real risk is for nefarious actors to misuse AI models for nefarious purposes, or for people to misuse AI models to solidify discrimination or even make it worse, make discrimination worse than it already is,” Aslett said.

“I think AI has the ability to revolutionize our society in so many good ways,” Jacobs said.

Threat to jobs

A more immediate concern for many, as shown in the poll’s results, is the potential implications for people’s jobs.

People are concerned that AI will replace them in the workplace.

“AI is not replacing people now,” Aslett said. “There is low risk at the current moment. But there is a lot of potential risk.”

Aslett said companies are already trying to adapt AI for use in the workplace.

“But I think these companies have struggled to replace human workers. But again that doesn’t alleviate the fear that some day or someday soon AI is going to replace workers,” he said.

AI could be deployed to make some kinds of jobs more efficient, and that could translate into employers hiring fewer workers. For some kinds of work, people may no longer be needed to prepare the initial product, but they would still be needed to revise and check it, Aslett said.

“There has to be some verification that these AI models are doing what you actually want them to do,” Aslett said.

That’s partly because, for reasons that are poorly understood even by experts, AI can “hallucinate.” In other words, it can make things up and produce results that include falsehoods.

Jacobs said technological advancements often eliminate jobs. But, he added, “it also opens up all these brand new fields that people didn’t see coming.”

For example, he said, there will be work for people who know how to ask the right questions of AI and assess the work product. There “still has to be someone trained [who] can command AI in a way that gives the needed product.”

It may or may not be reassuring, but when ChatGPT is asked if it’s going to threaten jobs, it claims it won’t:

“Not at all! I’m here to assist and complement, not replace. While I can handle various tasks and provide information, I lack the personal touch, intuition, and real-world experience that humans bring to their jobs. Think of me as a helpful sidekick, freeing you up to focus on more creative, complex, and human-centric aspects of your work. Embrace the collaboration, and we can make a great team!”

Across party lines

AI hasn’t — so far — become a partisan political issue.

The USF/FAU poll didn’t find statistically significant differences between the way Democrats and Republicans answered the questions.

Almost identical percentages — 52% of Democrats, 50% of Republicans and 51% of independents — said they are worried AI could jeopardize their jobs. And 73% of Republicans and 73% of Democrats said they would support a temporary pause on AI development. Independents were slightly less likely to support a pause.

“It was a little surprising to see no partisan differences at all. We’re in such a hyper partisan moment. Everything is political,” Neely said.

Neely and Aslett said they believe a central reason for the broad agreement is that political leaders haven’t yet taken firm positions that place themselves in different camps based on political affiliation.

If that happens, with Republican leaders advocating certain approaches and Democrats different approaches, the subject of AI could become polarized like so many other issues in America.

There were, however, gender differences.

  • AI will improve American society: Men, 55% agreed and 37% disagreed. Women, 38% agreed and 53% disagreed.

  • AI could threaten my employment in the future: Men, 45% agreed and 47% disagreed. Women, 62% agreed and 30% disagreed.

  • AI is being developed too quickly in the U.S.: Men, 48% agreed. Women, 62% agreed.

Neely said surveys often find men are less willing than women to acknowledge concerns about many issues. He said the findings may mean that women could require more convincing before buying into the implementation of AI.

The University of South Florida and Florida Atlantic University poll, sponsored by the Florida Center for Cybersecurity, surveyed 600 Florida adults from Aug. 10 to 21, using an online survey conducted through market research firm Prodege MR.

The poll has a margin of error of plus or minus 4 percentage points. Because subgroups (such as Democrats and Republicans or men and women) are smaller than in the overall poll, the margins of error are higher for those groups.
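
As a rough check on those figures, the standard worst-case formula for a poll’s 95% confidence margin of error is 1.96 × √(0.25/n). Here is a short sketch (the subgroup size of 200 below is hypothetical, for illustration; the article reports only the 600-person total):

```python
# Back-of-the-envelope margin of error at 95% confidence for a proportion,
# using the standard worst-case formula: MOE = 1.96 * sqrt(0.25 / n).
# The subgroup size below is hypothetical; the article reports only n = 600 overall.
import math

def margin_of_error(n: int) -> float:
    """Worst-case (p = 0.5) margin of error, in percentage points."""
    return 1.96 * math.sqrt(0.25 / n) * 100

print(f"Full sample (n=600): +/- {margin_of_error(600):.1f} points")        # ~4.0
print(f"Hypothetical subgroup (n=200): +/- {margin_of_error(200):.1f} points")  # ~6.9
```

This is why the subgroup margins are higher: halving or thirding the sample size pushes the margin up by the square root of that factor.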

Political disinformation

Aslett, whose work includes studying how people engage with AI models, said he sees political implications in the advances in AI, including a “huge risk” of AI-generated disinformation as soon as the 2024 election.

He said generative AI is lowering the cost of producing content that appears to have been generated by humans. “That’s probably one of the biggest risks this upcoming year,” he said.

AI could be used to generate an enormous amount of content alleging made-up election fraud, and that material could become omnipresent online.

A single “nefarious actor using AI” could put hundreds of articles on the same topic out on the web, “so when people search for a story they find more and more information corroborating that claim,” Aslett said. “Are we prepared for 1,000 stories to come out that says something that’s false?”

In an era in which trust in traditional fact-based news media organizations has declined, the disinformation could prove especially effective.

Jacobs added another potential concern: the prospect that so-called deepfake videos could show something that looks believable but isn’t true.

Speaking for himself, not the Center for Cybersecurity, Jacobs urged caution. “Don’t believe everything you’re going to see online over the next few years.”

Anthony Man can be reached at aman@sunsentinel.com and can be found @browardpolitics on Facebook, Threads.net and Post.news.