Is the NSA Hiding Another Heartbleed?

The debate over when, if ever, the U.S. government should be allowed to secretly exploit computer vulnerabilities for offensive spying has reignited in the aftermath of Heartbleed, the disastrous security hole that caused chaos across the web last month.

The incident revealed that nearly two-thirds of all servers on the Internet were vulnerable to attack for nearly two years through a flaw in the popular OpenSSL software, causing many to reflexively eye the NSA with suspicion. When Bloomberg News reported that the spy agency had exploited the bug in secret, the Office of the Director of National Intelligence quickly put out a statement denying that the intelligence community knew about the flaw.
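
For readers curious about what the underlying flaw actually looked like, the sketch below is a heavily simplified illustration in C, not the real OpenSSL code: the struct and function names are invented for clarity, but the essential mistake is the one Heartbleed exploited. The TLS heartbeat feature lets one side send a small payload and ask for it to be echoed back; vulnerable versions of OpenSSL trusted the length field in the request instead of checking it against the data that actually arrived, so a heartbeat carrying a single byte but claiming to be 64 kilobytes long came back with up to 64 kilobytes of the server's memory.

```c
/* A heavily simplified illustration of the class of bug behind
 * Heartbleed (CVE-2014-0160). This is NOT the actual OpenSSL source;
 * the struct and function names are invented for clarity. The essence
 * of the real flaw is the same: the heartbeat reply copies as many
 * bytes as the request *claims* to contain, without checking that
 * claim against the data that actually arrived. */
#include <stdlib.h>
#include <string.h>

struct heartbeat_request {
    const unsigned char *payload;     /* bytes the peer actually sent       */
    size_t               actual_len;  /* how many bytes really arrived      */
    size_t               claimed_len; /* length field taken from the packet */
};

/* Vulnerable pattern: trusts claimed_len. If a client claims 64 KB but
 * sends only a few bytes, the memcpy reads past the request buffer and
 * leaks adjacent process memory (private keys, passwords, cookies). */
unsigned char *build_reply_vulnerable(const struct heartbeat_request *req,
                                      size_t *out_len)
{
    unsigned char *reply = malloc(req->claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, req->payload, req->claimed_len);   /* buffer over-read */
    *out_len = req->claimed_len;
    return reply;
}

/* Patched pattern: a request whose claimed length exceeds what was
 * actually received is silently discarded, similar in spirit to the
 * bounds check added in OpenSSL 1.0.1g. */
unsigned char *build_reply_fixed(const struct heartbeat_request *req,
                                 size_t *out_len)
{
    if (req->claimed_len > req->actual_len)
        return NULL;
    unsigned char *reply = malloc(req->claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, req->payload, req->claimed_len);
    *out_len = req->claimed_len;
    return reply;
}
```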

It went on to say that whenever the NSA discovers a previously unknown security bug, commonly known as a “zero-day,” “it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose.” But the administration also carved out an all-too-familiar exception for whenever “there is a clear national security or law enforcement need.”

Heartbleed was truly a lose-lose situation for the NSA. If the agency's analysts did know about the bug, they'd be nefarious; if they didn't know about it, they'd be incompetent.

Regardless of whether you think the NSA is lying or telling the truth, the statements highlight the paradox of an agency with two fundamentally conflicting roles. On the one hand, you have the Information Assurance Directorate, the part of the NSA responsible for protecting national computer networks by discovering and patching security holes. On the other, you have the Signals Intelligence Directorate, including the agency's secret army of elite hackers, whose job it is to find and exploit those very same vulnerabilities for the purpose of breaking into systems and intercepting communications. Stealing sensitive data, installing remote access tools, and gaining live access to a computer's webcam or microphone are just a few of the ways that a zero-day can be utilized.

The problem is that you can't have it both ways. Unlike in conventional warfare, there is no “unilateral” or “bilateral” in computer security: a software bug is either patched or it is exploitable by anyone who knows about it. So the longer an intelligence agency like the NSA holds onto a bug without giving the vendor a chance to fix it, the greater the chance that someone else, a rival nation-state or a Russian cybercrime gang, for example, will discover and exploit it; over time that chance approaches certainty. And if the scales are tipped too far in favor of offense, as the Snowden leaks seem to suggest, the security systems that consumers, businesses, and even the U.S. government itself depend on are put at risk.
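
To put a rough number on that intuition, consider a simple back-of-the-envelope model; the annual rediscovery rate p below is an illustrative assumption, not a figure from any agency or study. If some other party has an independent chance p of finding the same bug in any given year, then the probability that it has been rediscovered within t years is

    P(rediscovered within t years) = 1 − (1 − p)^t

which climbs toward 1 as t grows. Even a modest p of 20 percent works out to roughly 67 percent after five years and 89 percent after ten.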

Heartbleed has forced the Obama administration to speak up on the issue of computer vulnerabilities, but so far the reassurances have boiled down to the familiar “trust us” refrain that has been a mainstay of the NSA's PR since the Snowden revelations began.

On Monday, White House cybersecurity coordinator Michael Daniel issued a statement saying that while there are “no hard and fast rules,” the administration does believe that “in the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” The statement offers a list of criteria, such as “how badly” the intelligence that would be gleaned by exploiting a flaw is needed, and whether the vulnerability would be “a significant risk if left unpatched.” How closely those recommendations reflect actual policy, however, is unclear.

Absent that clarity, it's difficult to tell what kind of ethical calculus U.S. intelligence agencies employ when deciding whether to attack or disclose. In the statement issued by the ODNI after Heartbleed, the decision-making process was referred to as the “Vulnerabilities Equities Process,” but officials have offered few details about its guidelines or the extent to which it is subject to external oversight.

While answering a Senate questionnaire in March, incoming NSA director Admiral Mike Rogers described the procedure as “a mature and efficient equities resolution process,” and said that “NSA will attempt to find other ways to mitigate the risks” when it decides to withhold a zero-day. A Freedom of Information Act request for documents describing the Vulnerabilities Equities Process has been filed and is currently pending.

People familiar with the process of finding computer vulnerabilities cite a kind of litmus test known in the spook world as “Nobody But Us,” or “NOBUS.” The logic of NOBUS is essentially security by obscurity: holding onto bugs and backdoors is okay, as long as the agency is confident that “nobody but us” knows about them, or that nobody else has the technical capability to crack the necessary codes (using a fleet of high-powered supercomputers, for example). The strategy is controversial, to say the least, especially now that many security experts find the idea of trusting the NSA as “the good guys” hard to swallow. Cryptographer and security guru Bruce Schneier has warned that it's “sheer folly to believe that only the NSA can exploit the vulnerabilities they create.”

In addition to finding its own software flaws, the U.S. government is also the biggest buyer in a thriving gray market for zero-day vulnerabilities. The NSA itself spent $25.1 million last year on “additional covert purchases of software vulnerabilities” from private companies, including French exploit vendor VUPEN, which delivers fresh zero-days to the agency and others through its “binary analysis and exploits service.” Since these flaws come from commercial third parties, critics find it extremely unlikely that they would pass the “NOBUS” test.

But assuming the coin flips to the side of disclosure, what happens to a newly discovered software bug then?

Presumably the software's vendors are notified, though how and how often this actually occurs is unclear—sometimes even to the companies themselves. Several security engineers at major software companies said that notifications from government bodies like the Department of Homeland Security's Computer Emergency Readiness Team could theoretically be the product of NSA bug discoveries, but it's virtually impossible to know for sure on a case-by-case basis.

“Given that [intelligence agencies] deliberately go out of their way to obscure the source of the notification, it's difficult to identify a specific case where this has happened,” said Morgan Marquis-Boire, a senior security engineer at Google.

The NSA might gain additional latitude when the software it wants to exploit is no longer supported by its parent company. Shortly after Microsoft finally ended its support for Windows XP last month, a new zero-day vulnerability was found affecting the 12-year-old operating system. In other words, a major security flaw was affecting widely used software that Microsoft had no obligation to fix. (Despite its age, XP still enjoys a 27 percent market share.)

Fortunately, the company decided to make an exception this time and released a patch for XP users on Thursday. But as time goes on, the chance that companies will extend this courtesy for future bugs in legacy products grows increasingly slim. And without transparency about intelligence agencies' disclosure policies, that means the decisions made behind the black soundproof glass of Fort Meade could have a greater impact than before.
