Biden’s Elusive AI Whisperer Finally Goes On the Record. Here's His Warning.

This spring, Americans were beginning to freak out about the coming AI revolution. The generative AI tool ChatGPT was sparking fears about plagiarism and job losses, and Bing’s AI-powered chatbot had recently tried to convince a New York Times writer to leave his wife.

Bruce Reed, White House deputy chief of staff and longtime Democratic Party policy whiz, was sitting in his West Wing office and starting to think maybe people weren’t freaking out enough.

Reed, who had been charged by President Joe Biden and White House chief of staff Jeff Zients with developing his administration’s AI strategy, hosted three experts in the White House: Tristan Harris, a former Google design ethicist, and his colleagues at the Center for Humane Technology, a non-profit Harris had helped set up back in 2018. Social media sites like Facebook and TikTok, they argued, had long used artificial-intelligence algorithms to manipulate users’ attention. Now, they said, new generative tools like ChatGPT would soon be capable of exploiting holes in computer code in an instant and mimicking human voices using just three seconds of speech. “When society had our first battle with AI over social media,” Harris told Reed, according to one meeting participant who was granted anonymity because the person did not want to be quoted disclosing details of a private meeting, “we lost.” The second battle was just beginning.

The meeting, Reed says, hardened his belief that generative AI is poised to shake the very foundations of American life. “What we’re going to have to prepare for, and guard against,” Reed says, “is the potential impact of AI on our ability to tell what’s real and what’s not.” Reed agreed to two interviews for this piece, one in his office and one by phone, to discuss tech policy and AI. They were rare extended conversations for Reed, who has been one of the Democratic Party’s most consequential policy minds for decades but who is reluctant to talk to the press.

You can see Reed’s worry in the executive order the White House released Monday, whose creation he led. The order establishes new guidelines for AI safety, including standards for new models and for watermarking content so users can tell when it’s been AI-generated. AI skeptics cheered the document’s ambition and scope. The executive order is the most comprehensive attempt yet to regulate this new industry in the United States.

The White House’s AI strategy also reflects a big mindset shift in the Democratic Party, which had for years celebrated the American tech industry. Underlying it is Biden’s and Reed’s belief that Big Tech has become arrogant about its alleged positive impact on the world and insulated by a compliant Washington from the consequences of the resulting damage. While both say they’re optimistic about the potential of AI development, they’re also launching a big effort to bring those tech leaders to heel.

Reed is a surprising instrument of this transformation. For much of his career, he’s been aligned with a moderate faction of the Democratic Party that tended to befriend Big Tech and distance itself from the anti-business rhetoric common in more liberal corners. In the ‘90s, he was an architect of controversial centrist Clinton administration policies, including welfare reform and the 1994 crime bill. Now, at 63, Reed finds himself on the same side as many of his longtime skeptics: a tough-on-tech crusader in favor of a massive assertion of government power against business.

On Reed’s desk is an eye-catching mug. “Wu & Khan & Kanter,” it reads. These mugs were made by Big Tech critics early in the Biden administration as a plea to the White House to hire antitrust advocates Tim Wu, Lina Khan and Jonathan Kanter, all of whom were eager to go toe-to-toe with Silicon Valley’s most powerful players. In part with Reed’s help, each of them found perches within the Biden administration in its earliest days. The mug’s a signal: This White House is going after Big Tech.

Reed, according to Alondra Nelson, a prominent sociologist who reported to him while heading the White House’s science and technology policy office, is Biden’s “avatar” on tech. When it comes to AI, Reed says the president “sees huge potential benefits” — from creating sophisticated models for extreme weather events to helping detect and cure cancer — but also “huge potential risks.” Reed says the president “is skeptical of what Big Tech has done so far, particularly on social media, and has never liked the fact that they have legal immunity [under federal communications law] from the consequences of what they do. He wants to make sure ordinary Americans see the benefits” of AI, “not suffer all the harms.”

Reed is fueled by a chance to get it right this time. Twenty years ago, hardly anyone could predict just how huge and far-reaching the effects of social media could be. Today, everyone seems to understand that the consequences of AI will be massive, even if they are just as undefined at this point as they were in the beginning of social media. To Reed, that means a huge opportunity — and a window that is closing by the day.

“It’s about time that policymakers try to keep up,” says Reed. “We can’t just sit back and watch to see how this all turns out.”


Reed’s career in the Democratic Party began inside the group designed to push it toward the center: The Democratic Leadership Council, a non-profit organization and network of politicians that fought for balanced budgets, free trade and the adoption of a “tough-on-crime” approach.

Reed came to the DLC as policy director after writing speeches for then-Sen. Al Gore and working on his doomed 1988 presidential run. “I was so impressed with Bruce Reed on the Gore campaign that I never even asked him for a résumé,” says Al From, the DLC’s founder.

Like other Democrats, the group actively cultivated Silicon Valley; the region, then just starting to flourish, was a testing ground for the group’s innovation-first, grow-the-pie agenda and a good source of necessary cash for an upstart organization then locked in battle with the well-established Democratic National Committee for power over the party’s direction.

Arkansas Governor Bill Clinton became chair of the DLC in 1990, and when he made a long-shot bid for the White House in the ’92 campaign, From recalls, “he said to me, I want a DLC person on the [campaign] plane, and I’d like it to be Bruce.”

Reed first landed in the White House in 1993. As Clinton’s deputy domestic policy advisor, and later as director of the domestic policy council, Reed tackled big centrist policies like welfare reform and managed to get V-chips — controversial devices capable of blocking the television shows kids watched — mentioned in Clinton’s 1996 State of the Union. (“When parents control what their young children see, that is not censorship.”) He didn’t, though, have much to do with policymaking that would later come to shape his career: a Clinton-era law that would both deregulate the networks that made up the Internet and limit online platforms’ liability for what users post to them, a provision known as Section 230.

After Clinton’s two terms, Reed returned to the DLC, eventually taking over for From.

Meanwhile, the kids he had in the early days of the administration were growing up — as was Silicon Valley, and Reed began to fret about both. He and his wife set limits on screen time. Neopets was OK. Facebook, which launched in 2004, wasn’t. He encouraged his younger kid to treat homework sessions like time in a “Sensitive Compartmented Information Facility,” a.k.a. a SCIF, and to lock his phone away. He worried about isolation and came to see the online world as far harsher, and more extremist, than its offline counterpart.

A few years later, Silicon Valley was booming, and the American economy wasn’t. After Barack Obama became president, Reed was tapped as executive director of the so-called Simpson-Bowles Commission, which aimed to cut the national deficit, including, just maybe, by cutting social programs. It didn’t work: The commission couldn’t agree on its recommendations, and in 2011, Reed joined the Obama White House as then-Vice President Joe Biden’s chief of staff.

Obamaland saw the Internet as a democratizing force, and tech entrepreneurs also saw Obama as an ally; one Google employee was recorded saying of then-candidate Obama upon his visit to the company’s Mountain View headquarters, “He’s fresh, he’s new, there’s something about him that’s Google-like.” At the time, the tech industry was widely embraced by the political class: Facebook CEO Mark Zuckerberg moderated a town hall with President Obama, Google employees actively advised the White House on tech and economic policy, and top Obama administration officials took post-government jobs at Uber, Apple and Airbnb.

Reed had limited tech policy influence in his post working for the VP, where he championed gun-control measures. He and Biden, though, took trips to Silicon Valley, and, Reed says, he came away finding its leaders dismissive of concerns about their impact on the world.

Alarmed about what he saw as the effect of social media on the public landscape — a coarsening of politics and media, a further isolation of people young and old — Reed pushed for a mention of kids’ privacy in Obama’s State of the Union without much success.

A few years after he left the White House, Common Sense Media CEO Jim Steyer invited him to work for his organization. Common Sense began its work as a nonprofit rating media for its suitability for kids and grew — in large part through Steyer’s relentlessness and networking — into a powerful, well-connected group that the New York Times once called “an advocacy army” and “political dynamo.” As Common Sense sees it, social media companies manipulate young people, harming their self-esteem in pursuit of profit.

While working as an adviser to Common Sense, Reed says, he came to think it was possible for advocates and government to compel tech companies to write new rules for their industry.

In the summer of 2018, Bob Hertzberg, a major player in California politics and then a Democratic state senator from Van Nuys, turned to Reed for help solving a huge political headache.

A San Francisco real estate developer named Alastair Mactaggart had poured millions into a ballot initiative allowing Californians to protect their personal data from Silicon Valley’s biggest companies, like Google and Facebook, which had built multi-billion-dollar empires on harvesting and monetizing it.

Hertzberg hated the end run around his beloved state legislature. But he understood why it had happened: California lawmakers had struggled to answer the public’s call for politicians to do something to guard their privacy in online spaces.

So he needed a unicorn of a bill that could appease Mactaggart while not becoming a must-kill measure for Silicon Valley and its lobbyists. He asked Reed for help.

Reed found slivers of common policy ground. Mactaggart says as he scrambled to figure out a key sticking point over individuals’ ability to sue over data harms, Reed puzzled out that Apple might be game to negotiate. It ended up being just the “thin edge of the wedge” he needed, Mactaggart says. But in the end, success or failure would come down to the words on the page, and there, too, was Reed. Hertzberg recalls walking out of his office in Sacramento at three or four in the morning to find Reed hunched over a borrowed desk, scrubbing the bill’s language.

Hertzberg recalls: “He was all, ‘We gotta get this right. We gotta get it right.’”

California’s landmark privacy bill passed the Senate, then the Assembly, and at just about the last possible moment, on June 28, 2018, was signed by then-Gov. Jerry Brown.

On the one hand, few were thrilled with the result; privacy advocates judged it inexcusably weak, and the tech industry found it unworkably overreaching. On the other, it remains the most sweeping set of rules ever passed on privacy in the United States — a country that has for decades struggled to do much of anything to contend with the Internet age. “Bruce,” says Steyer, “is sometimes vilified by people who don’t ever pass anything.”

In October 2020, Steyer and Common Sense published a compilation of essays called Which Side of History?: How Technology is Reshaping Democracy and Our Lives. Steyer and Reed co-bylined perhaps its bloodiest essay.

Silicon Valley’s CEOs were utterly irresponsible, their argument went, and as a result the Internet had turned “nasty, brutal, and lawless.” The essay not only pushed for the revocation of Section 230 — a call Biden himself had made in broad strokes on the campaign trail — but detailed a full repeal-and-replace plan for it. Reprinted in the then D.C.-focused tech publication Protocol (which was owned at the time by the then-owner of POLITICO) under the headline “Why Section 230 hurts kids, and what to do about it,” it was a bold statement when it was still unclear how President-elect Biden would translate his tech skepticism into policy — and where Reed would slot into his administration.

For fans of the tech industry, the rhetoric was more than bold — it was alarming. “Biden’s Top Tech Advisor Trots Out Dangerous Ideas For ‘Reforming’ Section 230,” was the headline of one post on the influential pro-innovation blog TechDirt, by its editor, Mike Masnick, a regular commentator on legal questions facing the tech industry. “That this is coming from Biden’s top tech advisor is downright scary. It is as destructive as it is ignorant.”

Meanwhile, on the left, Reed’s old reputation as a Clinton-era business-first Democrat was still intact. His name was floated for White House chief of staff and director of the Office of Management and Budget. Liberals balked. The left-of-center American Prospect magazine derided him as “Mr. Austerity” for his work on Simpson-Bowles. The Justice Democrats, a group aligned with Rep. Alexandria Ocasio-Cortez (D-N.Y.), circulated a petition saying “Rejecting Reed will be a major test for the fight for the soul of the Biden presidency.”

Reed got the deputy chief of staff job instead, freeing him both to travel with Biden, which he does frequently, and to dig in on policy. (The Biden White House has three deputy chiefs of staff; the other two focus on operations and implementation of passed legislation.)

Tim Wu, a prominent antitrust expert who was given a role overseeing competition policy in the White House in early 2021, recalls wandering through the West Wing in the administration’s early days, looking for Reed’s office. “I was a little nervous,” recalls Wu, a Columbia law professor and author of the 2018 book, The Curse of Bigness: Antitrust in the New Gilded Age.

It wasn’t just that Wu was lost. (With no nameplates on the doors, he stumbled into one room where Biden was on a video conference.) It was that while Candidate Biden had talked tough on tech on the campaign trail, Wu wasn’t sure how real that’d be for President Biden.

In their first meeting, Reed took out a piece of embossed cardstock and scribbled a list. “TECH POLICY,” went the header, and below it he listed out sticky problems: platform accountability, antitrust, so-called net neutrality, and so on. “Let’s see what progress we can make on these,” Wu remembers Reed saying, handing him the card.

The last item on the list: AI.

“Bruce, from the beginning, was serious about trying to do everything we could to restrain the excessive power of Big Tech,” Wu says.


In the summer of 2022, it looked like, just maybe, after 245 years, the United States Congress was finally getting ready to write comprehensive rules governing privacy. But it would be an enormously heavy lift. Republicans were insisting that any bill supersede the California law Reed had helped author, to give, they argued, the business community only one law to comply with. But some Democrats — especially Californians, including then-Speaker Nancy Pelosi — balked at the idea of preemptively tossing their state law in exchange for a federal law that could end up far weaker.

“This should be like pushing on an open door,” Wu recalls Reed saying, of the country coming up with some sort of baseline for what it expects when it comes to Americans’ personal data. But in Washington, few doors swing easily.

At the time, the White House balked at trying to force the hands of congressional leaders it needed at that moment on other policy priorities, especially post-Covid and amid attempts to shore up the rattled country. “Congress is the world’s greatest football-snatcher-awayer,” says Wu, adding that Reed took a pragmatic approach to cutting losses once it became clear Congress wasn’t going to play ball: “He was a legislative realist.”

By the end of 2022, it was clear the privacy bill had, like others before it, failed, leaving a gap in the Biden administration’s emerging tech record. But it arguably had a hidden upside: It cemented the idea that Congress was incapable of moving at the pace of the Internet, shifting the center of gravity on tech policymaking to the White House and its powers, which, while limited, were at least unilateral.

And Reed was the person trusted to wield that power.


When Alondra Nelson, whom Biden named to lead the Office of Science and Technology Policy in 2022, arrived at the White House, she began working, under Reed’s guidance, on what they would end up calling a “Blueprint for an AI Bill of Rights.”

Nelson says that through the months-long process of working on the document, she came to realize that success in Biden’s White House, and in Reed’s office, meant embracing a straightforward way of speaking about complex topics. The ability to boil policy down to everyday language is, aides say, something Biden values in Reed. “He’s just plain-thinking enough to be truly brilliant,” says an ally of Reed’s, who spoke without attribution to avoid being seen as drawing attention to him, given his success working behind closed doors.

As Reed sees it, it’s especially valuable when dealing with a Silicon Valley that has long found success in Washington by cloaking its endeavors in technical jargon.

“The industry got away with a lot of stuff because ‘It’s complicated to understand,’” says Reed. “And who wants to work on tech policy if you actually have to understand how these microscopic things work? But you don’t. You just have to bring a common-sense view to what is good about it and what’s not, and what we can do — and treat it with the same healthy scrutiny that we do everything else in American life.”

The AI bill of rights draft took months to wend its way through the White House’s approval processes, and when it was released last October, after much delay, the final 73-page blueprint was plain-talking, pointing to, for example, algorithms that unfairly decide who gets what credit products — “too often, these tools are used to limit our opportunities and prevent our access to critical resources or services” — and politically savvy. The document takes a lighter-touch approach to law enforcement, arguing that some of its stated principles, like transparency, might need to be interpreted differently in that context — alarming advocates who said that police use of AI tools like facial recognition is among their top concerns.

Reed was, says Nelson, particularly committed to ensuring that AI-powered algorithms don’t exacerbate bias and discrimination — and to telling a story about AI that the American public could understand. “Bruce is the standard bearer of holding together a narrative and a vision,” says Nelson, of AI having tremendous upsides but of the need for government to defend the rights of the American public as artificial intelligence gets ever more entrenched in American life.

One of Reed’s chief worries: that AI can erode trust in a society where people are already unsure who to believe. “Voice cloning is one thing that keeps me up at night,” says Reed. “That technology is still new, but it’s frighteningly good. It hasn’t dawned on society yet how much the notion of perfect voice fakes could upend our lives. No one will answer the phone if they can’t be sure whether the voice on the other side is real or fake.”

Reed goes on: “All of us, just as members of the human race, need to worry about the erosion of trust in our daily lives.”

These days, Reed chairs a regular meeting in the White House’s Roosevelt Room for senior administration officials focused exclusively on AI. Scheduled three times a week, the sessions regularly draw National Economic Council director Lael Brainard, national security adviser Jake Sullivan, director of the Office of Legislative Affairs Shuwanza Goff, and Office of Science and Technology Policy director Arati Prabhakar; Secretary of Commerce Gina Raimondo has participated. In a meeting of the president’s cabinet this summer, cabinet members played around with ChatGPT. Can you make me a bioweapon? asked one participant. (It couldn’t. For now.) Biden tasked his cabinet with studying everything from how, exactly, to classify the power of emerging AI models to how the country’s patent law applies to what generative AI dreams up.

In previously unreported remarks in a meeting in early October, Biden told his cabinet that AI would affect the work of every department and agency. “That’s not hyperbole,” said Biden, according to one participant, who was granted anonymity to discuss details of a private meeting. “The rest of the world is looking to us to lead the way.”

Reed is tasked with getting the Biden administration moving in one direction on AI, but it’s not easy. There are, of course, competing interests. The national security world, for example, is fixated on China’s and Russia’s demonstrated interest in using AI in their military operations — capabilities that, they worry, could spread to smaller nations and even terrorist groups. Officials responsible for economic policy, meanwhile, balk at the possibility that American tech companies’ AI development will be smothered by various countries’ dictates and demands, inspired in part by the United States’ own attempts to set the rules of the road.

As Reed sees it, it’s all connected. Washington’s failings on social media led to an uptick in everything from online bullying to digital sex trafficking. Washington failing on AI would mean undercutting the very foundations of American life.

Sure, says Reed, the White House worries about DIY bioweapons. But, he says, “we’re just as worried about what scammers can do.”

“It will be up to government at every level to help educate people to the threat, and also throw the book at those who take advantage of it,” says Reed. “We have to be vigilant.”


GPT-4 premiered in March. In May, the Biden White House convened a meeting with CEOs of four of the companies developing the world’s most advanced AI systems: OpenAI’s Sam Altman, Alphabet’s Sundar Pichai, Microsoft’s Satya Nadella, and Anthropic’s Dario Amodei. The message delivered to the companies: Come back in a month (a blink of an eye for Washington) with ideas for what rules they think should govern the industry.

Working with the companies, Reed’s office cobbled together a document laying out what the firms might agree to. In July, the White House announced that it had secured so-called voluntary commitments from the companies to address the risks the tools they’re building pose, like committing to third-party testing of their AIs before unleashing them on the public.

Some critics thought the commitments were toothless, but administration officials argued that they were what was possible in a month.

On Monday, though, came the executive order. Some of those who’d worried that the White House was taking its eye off the ball on existing harms of AI came away relieved.

In the world of AI, there is a debate over what the biggest challenge is. Some think policymakers should try to solve already-known problems like algorithmic bias in job-applicant vetting. Others think policymakers should spend their time trying to prevent the seemingly sci-fi existential crises that ever-evolving generative AI might trigger next.

Alexandra Reeve Givens is the president and CEO of the Center for Democracy & Technology, a D.C. non-profit with ties to the tech industry that advocates for the positive use of tech tools. Late this summer, Givens told me that she and allies were working to make sure the Biden White House kept its focus on algorithmic bias and similar ills, “even as generative AI comes in and changes some of that conversation.” I checked in with Givens after the issuance of the executive order. She called herself and her organization “thrilled” to see the order’s highlighting of would-be solutions to current challenges, like calling on the secretary of housing and urban development to investigate how AI screenings could violate federal housing laws.

The order’s success hinges on federal agencies carrying it out. But there are those who think the White House has already set the wrong tone.

Rob Atkinson is today the president of the Information Technology & Innovation Foundation, the tech-friendly D.C. think tank. But back in the late ‘90s and early aughts, he worked closely alongside Reed as the director of the “Technology and New Economy Project” at the Progressive Policy Institute, the think-tank arm of the Democratic Leadership Council that existed to field-test some of the DLC’s new policy thinking.

“We’re in a tech panic cycle,” says Atkinson, and about the Biden White House, he says, “they should be the voice of reason and caution here. They shouldn’t be jumping on the bandwagon.”

Tristan Harris, for his part, would suggest there is a reason for the panic. He argues that generative AI should be thought of as a “hyperobject,” using a term meant to refer to a phenomenon so powerful, so sweeping, as Harris puts it, “it affects everything everywhere all at once.” AI is learning so quickly how to shape the world, Harris argues, that government must jump ahead of it, even if exactly what the technology might do isn’t fully clear yet.

Reed doesn’t think the White House has to choose between the already-existing AI harms of today and the potential AI harms of tomorrow. “My job is to lose sleep over both,” he says. “I think the president shares the view that both sides of the argument are right.”

And, he argues, the tech industry has to be made to address those worries. “The main thing we’re saying is that every company needs to take responsibility for whether the products it brings on to the market are safe,” says Reed, “and that’s not too much to ask.”