Washington confronts a new AI fight

Washington’s mounting struggle to deal with artificial intelligence took a sharp turn Wednesday morning, as House members wrestled with new and unsettling questions raised by human-like digital brains, including: Who even owns what a robot creates?

A day after OpenAI CEO Sam Altman grabbed Washington’s attention with a call to regulate his own industry — and a senator opened with a public stunt he then called “one of the more scary moments” in Senate history — political leaders are diving deeper into the thorny details of how, or even whether, to regulate these new systems.

The hearings followed a private White House meeting earlier this month with tech-company leadership, as well as a raft of new initiatives intended to test and guide AI technology. Altman also met privately yesterday with the House AI caucus and with House leadership from both parties to talk about AI.

But as concern mounts about AI’s possible effects on people’s lives, the economy and even human survival, American leaders are finding they have relatively few tools to put safety reins on new technological platforms. Even in Europe, where officials are armed with far stronger data protection and privacy rules, governments have struggled to find a balance between consumer protections and their support for tech-industry competitiveness.

As the Washington conversation takes shape, fault lines are already emerging in the seemingly bipartisan call to create new regulations.

The most concrete idea to emerge on Tuesday, a new agency to license the most powerful AI platforms — supported yesterday by senators of both parties as well as Altman himself — was dismissed afterward as “baffling” by a key tech lobbyist, and “seriously flawed” by Hodan Omaar, a senior policy analyst at the Center for Data Innovation, a nonpartisan think tank.

“It’s baffling that the U.S. would even consider a restrictive licensing scheme for artificial intelligence development, let alone the idea that some international governing body could get countries to comply with such a thing,” said Steve DelBianco, president and CEO of NetChoice, a trade group that represents companies including Meta, Google and Amazon.

Without a clear path forward on new AI regulations for now, a House subcommittee dove into a different question on Wednesday, one of growing importance as the new technology develops and threatens to swamp whole industries: Who owns it all?

While the bulk of the focus has been on the safety and privacy concerns of generative AI, especially for so-called large language models like the ones behind ChatGPT and Google’s Bard, copyright issues have become an increasing concern.

Many publishers and artists have realized just how much of their work is used in training AI models, and lawyers have begun circling the complex new question of who will own the work created by machines.

At stake could be billions of dollars in the new industries that could rise around AI models — and the financial health of publishers and other creators whose work the AI both uses and could replace.

Though it drew less attention than the Altman hearing, which attracted top officials and throngs of journalists, the House Judiciary Subcommittee on Courts, Intellectual Property and the Internet aired out some key emerging concerns during its meeting Wednesday.

One of the biggest issues is how to compensate or credit artists, whether musicians, writers or photographers, when their work is used to train a model, or is the inspiration for an AI’s creation.

Ashley Irwin, president of the Society of Composers and Lyricists, put it bluntly in the hearing: Generative AI poses an “existential threat to our livelihood.” His concerns echo those of artists, writers and other creative professionals increasingly worried that AI tools could easily imitate their work — and put them out of business.

“It’s very Orwellian how the tech industry manages to change terminology on us,” Irwin said. “It’s not data and content to us, it’s music, it’s photographs. It’s not file-sharing, it’s stealing. Very simple.”

Beneath the AI models lies a vast amount of training data, much of it material created and owned by someone other than the AI developer. One key issue lawmakers are being pressed to address is who should be compensated for all that material, and how that compensation would work.

Subcommittee Chair Darrell Issa, whose business background is in the electronics industry, proposed one mechanism, a database to track the sources of training data: “Credit would seem to be one that Congress could mandate — that the database input could be searchable so you would know that your work or your name or something was in the database.”

Issa has criticized the federal government for being too slow to act on tech matters, and has previously gone on the record with his concerns about AI’s capacity to both use and create copyrighted work.

Issa told POLITICO after the hearing that he’s planning future field hearings in Nashville and potentially Los Angeles on AI’s impact on artists. And his next Judiciary subcommittee hearing will focus on generative AI and patents.

“Even though they might turn into different legislation, the entity that will be regulated — the AI entity — will actually be the same entity,” he said. “So what we do in copyright cannot be ignored between patentability and copyright for the actions of AI.”

Rep. Hank Johnson — the top Democrat on the subcommittee — offered a sober take on whether the U.S. government can still get ahead of the curve on generative AI: “No. Because it’s so far ahead of us now that we’ll play catch up. And we’ll do our best to not restrain technology but to temper its excesses. You’re not going to get this genie back in the bottle. You’re not going to slow it down. I don’t see Congress doing that.”

The question at the heart of today’s hearing, though, was whether Congress could help decide who will benefit from a technology that has suddenly become immensely profitable.

Multiple panelists said they would be OK with their work being used to train AI — but only if they were credited and compensated for it. A key question now emerging is when the use of an artist’s work to train AI constitutes “fair use” under current law, and when it amounts to copyright infringement.

That distinction will be critical for artists going forward, many of whom rely on royalties and licensing, especially in fields that may not drive significant income.

“For years and years and years, the arts people have traded on: ‘I’ll do it because I love doing it,’” Irwin said. “And it’s true, we love doing it. But at some point, love doesn’t feed the family — and that’s the real harm here. There has to be a way to co-exist.”

On the prospects of coming together for bigger-picture regulation, Rep. Nancy Mace (R-S.C.) — who has become a key Republican voice on AI issues, and plans to hold her own field hearing in Silicon Valley on generative AI next month — cautioned that new agencies or global regulations might be a bridge too far, especially with an election cycle looming.

Today’s topic, to her, was more on point: “For the next 12-24 months, disclosure — the source of information or the source of images and videos — that’s where I think our focus should be. Those kinds of things that make sense to be done fast before elections.”

Mallory Culhane contributed to this story.