How Ro Khanna Plans to Prevent the Next Cambridge Analytica Scandal

For more than a decade, the giants of Silicon Valley have been pumping out products and services that millions of people and companies now use every day: social networks, search engines, two-day shipping on toilet paper. Only recently, however, have Americans become aware of just how much of their privacy they surrendered—sometimes knowingly, sometimes not—by joining this ecosystem of app-centric convenience.

In 2017, for example, Yahoo disclosed that four years earlier, hackers had gained access to every single one of its three billion accounts. A 2018 Facebook breach left the data of 50 million users exposed; in 2019, records of more than 500 million Facebook accounts appeared on a public-facing Amazon cloud server. During the 2016 election cycle, the political consulting firm Cambridge Analytica quietly harvested personal information from millions of Facebook profiles and used it to target unsuspecting users with political ads. At this point, Americans are so fearful of tech companies mismanaging their data, and so used to them giving it away, that a viral Instagram hoax promising to keep your photos out of the company's clutches proved convincing enough to fool the U.S. Secretary of Energy.

The dearth of regulatory oversight stems, at least in part, from the speed with which the industry has evolved. But, argues California congressman Ro Khanna, a “technological illiteracy” in Washington plays a role, too. Khanna, a Democrat whose district is home to a large chunk of Silicon Valley, recalls a recent hearing in which Iowa Republican Steve King asked Google CEO Sundar Pichai to explain how notifications could pop up on an iPhone while King’s granddaughter was playing games on it. “And Pichai has to patiently explain that Google doesn't make the iPhone, and that Apple makes the iPhone,” Khanna says. “Why is it acceptable for a sitting member of Congress not to know that?”

Other recent headlines have thrown the need for more robust regulatory intervention in the tech world into even sharper relief. Online communities are radicalizing mass shooters, including the Texas man who targeted Hispanic shoppers when he killed 22 people at an El Paso Walmart earlier this month. Hackers are holding municipal networks hostage for millions of dollars, forcing small-town governments to either pay up or rebuild their systems from scratch. Russian interference in the 2016 election likely helped swing that contest to Donald Trump. The disinformation campaigns of 2020 might be even worse.

Last fall, in an effort to build a framework for a better-regulated Internet, Khanna unveiled what he calls the Internet Bill of Rights: ten legal protections meant to modernize individual rights for the digital age. Despite what the name might indicate, these rights would apply to any company whose business involves the collection of consumer data, not only to tech giants like Facebook, Google, and Twitter. The proposal includes the right to know about “all collection and uses of personal data by companies,” and to be notified “in a timely manner when a security breach or unauthorized access of personal data is discovered.” Other items on his list would strengthen people's ability to correct or delete personal data in a company's control, and would require companies to obtain consumer consent before collecting their data or sharing it with third parties.

I recently spoke with Khanna about the Internet Bill of Rights, the cultural impact of social media, and the future of laws and regulations designed to keep private data private. An edited and condensed transcript of our conversation follows.


GQ: If you could pick one item from the Internet Bill of Rights to pass by itself, which one would it be?

Congressman Ro Khanna: The right to know about collection and use of your data. That would have prevented the Cambridge Analytica scandal. Now, most people would not have exercised that right immediately. But I'm confident that enough activists would have noticed; they would have put in requests for their data, flagged the suspicious activity, and forced Facebook to track it. At the bare minimum, I think people should have the right to know.

Even if you had two billion users still wanting to use Facebook, the stories on Cambridge Analytica would have been written four months before the election, not four months after. The right to know will give journalists the ability to hold companies accountable, even if it doesn't dramatically alter consumer preferences.

What are the areas of agreement with your Republican colleagues about tech issues?

I’m working with [House minority leader] Kevin McCarthy on a separate project—we're very, very close to legislation that would deal with the election interference that took place on technology platforms. We are calling for an “Information Sharing and Analysis Center” that would make it easier for tech companies to exchange information about bad actors. Right now, if Facebook catches a user trying to interfere in an election, they can't necessarily inform Twitter or Google or other social media companies or the government. We want to make sure they can tackle election interference in the same way banks can share information about fraudulent accounts.

There is a consortium right now, but it's not very well-formed. [Ed. note: After the Christchurch mosque shootings in March, some tech companies cooperated to take down footage of the shooter’s livestream.] Our bill changes the law to make sharing easier. If Facebook identifies someone on the platform as interfering in an election or spreading disinformation, there would be a presumption of acceptability—a safe harbor—for them to share that user’s information and flag them as problematic without it being a privacy violation.

If privacy regulations interfere with tech companies' ability to monetize consumer data, how do you anticipate their business models shifting?

I don't think monetization of the consumer always has to be the end goal. I think you could have other models for revenue that don't require relying on individual data.

Regulations in the [European Union's General Data Protection Regulation], for example, relate to the anonymization of data. So if Facebook wanted to understand advertising patterns, they could give data to a third party, who would give that data back to them and say, “Okay, yeah, for a sunscreen, you want to advertise it to people who are over 65 and have a family with older kids.” But Facebook wouldn’t be able to do that analysis on the individual. I think those types of regulations would make sure companies have to look for alternative sources of revenue based on the product they’re selling, not based on their use of data.

How would such a regulation affect companies like Facebook and Twitter, which for the most part have not gotten into the business of selling products so much as selling ads?

Maybe their ads would be less precise, right? I mean, they are selling a product to consumers—the product is that they're connecting you with your extended network of family and friends. I probably wouldn't be in touch with half my high school class, but I am, marginally, because of Facebook.

My point is that if you forced the anonymization of data, you could still have Facebook make money—they could still show me ads. I don't think anyone would object to Facebook selling ads or having ads directed at me, as long as people didn't think those ads were manipulated by personal data. These companies could still do very well without going to the extreme of micro-targeting to maximize revenue.

The objection they would probably have is that the ability to micro-target is their value-add, and the more general ads get, the less they're going to be able to sell them for.

Between Facebook and Google, they've got 60% of the ad market. They're doing fine. I mean, people would still rather put ads on Facebook or Google than on other digital platforms that don’t get as many eyeballs. I would argue to them that they should focus on maximizing the value people get out of their content. Then, even if the ads aren't as precisely targeted, they're still going to do fine. I'm not concerned about their market share anytime soon.

How do you provide incentives for tech companies to keep bad users—for example, those who traffic in hate speech or misinformation—off their platforms? Reductions in their user base, by definition, are going to be bad for them. How do you nudge them to act in the public interest?

You require them to make massive investments in artificial intelligence and human reviewers, and to have a strict standard that any content that's inciting violence or that is clear hate speech shouldn't belong on those platforms. That's an easy call.

I think regulation of misinformation is a much harder call. I think you have to have a school of social media ethics, just like you have journalistic ethics. If I told you that during my time in Congress I had solved the issue of misinformation in my district (which I haven't, and I'm not saying that), you would say, “Let me get two other sources, and not just take this guy's word for it.” Why? Not because you're worried about being sued by my potential political opponent. Not because of a legal regulation. But because of your sense of ethics, and what it means to be a good journalist.

New media companies have to have some sense of their ethical responsibilities in a democracy, too. If a post has a certain degree of virality and is clearly false, maybe they would display alternative information or a disclaimer. I believe there have to be new media journalism schools that emerge at places like Columbia and Northwestern and others to tackle these questions.

In the aggregate, has social media—in this largely unregulated, unchecked form—been a net positive or a net negative?

I believe it's a net positive. I would argue that for ten years, technology platforms did an extraordinary amount of good. Black Lives Matter and #MeToo gained traction on social media. The Arab Spring gained traction on social media. Barack Obama and Bernie Sanders emerged through social media. I don’t believe we would've had nearly as diverse a Congress if it weren't for social media. I don't think that there would be the same appreciation or empathy for human rights across the world if it weren't for social media.

On the flip side, you could say Donald Trump wouldn't have emerged but for social media, depending on your perspective. And of course, now we’ve seen election interference, radicalization, polarization, hate speech, voter suppression, even mass shootings. We’ve seen the dark side.

I guess the point is that technology reflects humanity. It has the potential for great good, and to democratize access to communication. And it has the potential for great danger, as criminal minds have caught up and are manipulating these platforms for their purposes. Technology is amoral, but it requires humanistic values to steer it in a way that's empowering, and not detrimental to social progress. It's up to us to maximize the good and minimize the bad.


