Willy Wonka fiasco highlights risks of AI-made ads

Semafor Signals

Insights from VentureBeat, 404 Media, and Business Insider

The News

A disastrous Willy Wonka pop-up experience in Glasgow that went viral on social media this week has been widely blamed on artificial intelligence-generated images that lured parents into buying tickets.

Instead of the professional, immersive experience at Wonka's chocolate factory promised in the online pictures, visitors, mostly parents who had brought their children, reported a hastily produced, shoddily decorated, chaotic event.

Whether a scam or simply false advertising, the Wonka furore highlights generative AI's power to create deceptive advertisements and media, deepening concerns about its use in spreading misinformation and the potential legal consequences.

SIGNALS

Calls for AI-generated adverts to be regulated

Sources: VentureBeat, StateScoop, Milwaukee Journal Sentinel

Following the Willy Wonka incident, social media users called for more guidelines to regulate how AI can be used in advertising, VentureBeat reported. The issue is also becoming political: after Florida Governor Ron DeSantis used unflattering AI-generated images of former U.S. President Donald Trump in a political ad, state lawmakers introduced a bill that would require AI-generated political ads to carry disclaimers. A similar bill is being debated in Wisconsin, and at least five other states have already passed disclaimer requirements, according to the Milwaukee Journal Sentinel. States are at the forefront of the battle over AI because the federal government is "going very slow" on AI regulation, one lobbyist told the Journal.

AI is not necessarily doing anything new

Source: 404 Media

A recent 404 Media investigation dove into how ghost kitchens, delivery-only restaurants that often rely on other restaurants to prepare the food they sell and advertise on apps such as DoorDash and Grubhub, use AI to create images of their products. "In these cases the ghost kitchens are showing people pictures of food that literally doesn't exist, and looks nothing like the actual items they're selling, sometimes because the faulty AI is producing physically impossible food items," reporter Emanuel Maiberg wrote. He acknowledged, though, that the use of AI is not so different from traditional food advertising, in which meticulous cosmetic techniques are applied before food is photographed: "A Big Mac in a commercial doesn't look like what you actually get when you buy one."

Courts could hold Big Tech accountable for third-party ads that use AI

Sources: Ad Age, Business Insider

In the United States, Big Tech hasn't cracked down on the use of AI in advertising because of the 1996 Communications Decency Act's infamous Section 230, a law that shields online services from liability for third-party content produced and posted by their users, according to Ad Age, an advertising and marketing publication. But because Big Tech companies own and operate many of the generative AI programs already available, any content those programs produce could be seen as created by the company rather than the customer, and therefore not covered by Section 230. U.S. courts have not yet ruled definitively on the issue, but experts theorize that they would treat AI-generated content as being produced by the tech companies that own the services, meaning those companies could be held liable. "Plainly on the face of the statute, generative AI falls outside of it," one law professor told Business Insider.