Key takeaways from POLITICO’s 2023 AI & Tech Summit

A day after the government launched its historic lawsuit against Amazon — and the same afternoon that Meta unveiled an army of new chatbots — experts and leaders at POLITICO’s 2023 AI & Tech Summit offered a sweeping inside look at why AI regulation is starting to feel urgent, but also remains painfully difficult to get done in Washington.

Over the course of six hours, an array of experts, lawmakers and technologists drilled into the risks and opportunities that are quickly emerging around AI — and offered insights into where to watch as Washington (gradually) stirs itself to action.

Here are four takeaways:

1. The national risk-opportunity calculus for AI is still fiendishly hard

Experts see great big-picture opportunities in AI, from monitoring climate to aiding national defense, but also a vast array of risks, from discrimination to power consumption to — ahem — the end of humanity.

Nearly every speaker agreed that AI has become a top issue that can no longer be ignored — in part because the technology is fast becoming ubiquitous, and the most powerful models are so accessible to consumers.

Michael Kratsios, former U.S. chief technology officer and now managing director of the San Francisco-based Scale AI, said the release of ChatGPT “fundamentally changed the dynamic in Washington” and made the conversation around AI more urgent and concrete.

“It is something that everyday Americans can touch, feel and play with personally,” he said. “Before it was just sort of this, you know, ‘Terminator’ dream in the movies or something that was happening, maybe in some factory somewhere through a robot.”

Jake Loosararian, the CEO of Gecko Robotics, an energy infrastructure monitoring company, warned that energy-hungry AI servers could burn more fossil fuels.

But even that isn’t clear yet: “I’ve been looking at the literature on this over the past couple of months,” said David Sandalow, a fellow at Columbia University’s Center on Global Energy Policy, “and my main conclusion is the data are really poor.”

Lakshmi Raman, the CIA’s director of artificial intelligence, said the agency was using AI to improve basic tasks like language translation and to make office work easier, but warned that AI was a double-edged tool — the CIA’s adversaries were deploying the technology as well.

“We’re thinking about things like deepfakes and disinformation as well as cybersecurity risks, how is AI being used to create more phishing emails, or generating more malware,” she said.

2. Lawmakers acknowledge they’re far from comprehensive AI regulation

Rep. Jay Obernolte said Wednesday his near-term priority as vice chair of the Congressional Artificial Intelligence Caucus is picking a lane on how to legislate the emerging technology.

“Are we going to do a broad-based approach with a new agency? Potentially like the EU has done? Or are we going to adopt a sectoral approach, where we empower our existing sectoral regulators to regulate AI within their sectoral spaces?” Obernolte (R-Calif.) said.

Obernolte’s basic questions reflected a Congress still in the early phases of regulation. In the Senate, Majority Leader Chuck Schumer this month convened an “AI Insight Forum” of tech leaders, not long after he laid out a framework in June for Congress to get on a path toward comprehensive regulation. But some lawmakers have pushed for a more efficient legislative process to match the breakneck pace of innovation.

Sen. Todd Young (R-Ind.) said “It’s very likely we’ll pass some narrow pieces” of AI legislation, but hedged when it came to broader packages.

There’s a more immediate obstacle as well: Rep. Ted Lieu (D-Calif.), speaking on another panel, said a looming government shutdown is stealing focus from crafting AI laws. “We’re trying to stop stupid stuff from happening, which means it’s really hard to work on AI,” he said.

One of the challenges to regulating AI has been the age of lawmakers trying to rein it in. Sen. Ed Markey (D-Mass.) pointed to his own 47 years in office.

“I started in Congress before there were fax machines and today there are no fax machines. That’s how long I’ve been around,” he said.

3. Agencies might be where the action is

Federal Trade Commission Chair Lina Khan, a day after filing an antitrust lawsuit against Amazon, put the AI industry on notice, warning that her agency will go after businesses engaging in anticompetitive practices in the nascent field.

Khan said the FTC wants “to make sure the market understands there is no AI exemption to the laws on the books.”

Even the FTC, though, lacks the resources it needs: Khan said she was looking to Congress for more funding to support the agency’s tech staff.

Kratsios and Obernolte both agreed that an agency-by-agency model of regulation, based on the uses of AI rather than licensing the underlying models, would be a practical way to start regulating AI, especially in the absence of any action on Capitol Hill.

“It’s much more difficult to teach a new agency everything the FDA knows about protecting patient safety than it is to teach the FDA what it doesn’t already know about regulating AI,” said Obernolte.

4. Washington is counting on self-governance from businesses right now

With AI regulation still fluid, industry players are making their own suggestions, and regulators are relying in part on their goodwill.

Anne Neuberger, the Biden administration’s deputy national security advisor for cyber and emerging technology, touted voluntary commitments made in recent months by more than a dozen companies to manage the risks of AI, and took a sunny view of the technology.

“We're very committed to AI being a force for good to countries around the world,” she said.

She noted the White House is working on an executive order on AI intended to ensure the technology does not increase bias or compromise national security.

“We've used that as a bridge to the regulatory work that Senator Schumer, leader Schumer is leading on the Hill right now,” she said.

Several technologists made comments that showed they are operating in a regulatory vacuum.

“I think what’s really important is for the industry to actually come together along with the stakeholders to develop shared norms and best practices around model access,” said Tom Lue, general counsel at Google DeepMind.

“I think quite a few of us have our own set of guardrails,” said Durga Malladi, senior vice president at Qualcomm Technologies.

Christina Shim, head of sustainability software at IBM, said, “We focus on five pillars of transparency, fairness, explainability, robustness, privacy.”

She added that despite AI’s issues, “we would be sorely lacking” should the technology be cast aside. “It accelerates our journey, especially in climate change, where we don’t have that much time,” she said.

Rebecca Kern, Mohar Chatterjee, Corbin Hiar, Ben Schreckinger, Derek Robertson, and Joseph Gedeon contributed to this report.