How the world’s courts are dealing with AI

Semafor Signals

Insights from the Australian Strategic Policy Institute and Wired

NEWS

UK patents can be granted only for inventions created by humans — not artificial intelligence — the country’s Supreme Court ruled Wednesday.

It’s the latest in a series of AI-related court rulings across several countries, as courts and regulators grapple with the implications of rapidly developing AI capabilities. While many Western countries are prioritizing human-created work, others, such as China, are extending protections to AI-generated content as the race to lead AI development intensifies.

SIGNALS

AI inventions as “trade secrets” instead of patents

Sources: Tech consultant Shelly Palmer

The UK court’s decision does not expand the scope of patentable inventions to include AI-generated ones, nor does it fully explore the technology’s “intellectual contributions,” Syracuse University professor Shelly Palmer wrote. While legal systems will need updating to keep pace with AI, Palmer suggested that inventors could sidestep the courts by keeping their AI-generated products a “trade secret” rather than seeking patents, which compel developers to disclose their ideas to the public and risk helping competitors. “If AI is truly valuable in assisting with the creation of better mouse traps… why the need to share?” Palmer wrote.

Chinese courts’ leniency on AI reflects Beijing’s bid for global AI dominance

Sources: Australian Strategic Policy Institute, Semafor

A Beijing court’s recent ruling that AI-generated content can be protected by copyright “advances a wider effort in China to surpass the U.S. to become a global leader in AI,” according to the Australian Strategic Policy Institute (ASPI). U.S. chip restrictions have hampered Beijing’s ambitions, but the ruling gives Chinese tech companies an incentive to use AI, reflecting the country’s pro-business stance on AI regulation, a Chinese law professor said. If Chinese courts take a more lenient approach to AI-related legal challenges, it could blur the lines between “AI-generated and human-crafted worlds,” ASPI warned. That poses a risk for AI development models, which rely on “high-quality data sourced from human-generated content.”

How to train AI: “Get permission”

Sources: Wired, Associated Press

Big tech’s use of creators’ content to train AI models poses an “existential threat” to artists and writers, who worry that these models will eventually replace them in the workforce, Wired reported. The concern was amplified during this year’s Hollywood screenwriters’ strike, which ultimately handed writers a temporary win over how production studios can use AI-generated content. The most ethical way to train AI is “very simple: get permission,” said Mary Rasenberger, CEO of the Authors Guild, which is developing a tool to help AI companies license and pay for its members’ work to train AI models.