Your standard medical drama is supposed to end with a “how it happened” scene, in which doctors explain what really went wrong with the patient and how they solved it. But it doesn’t look like the recent ransomware episode at MedStar Health will get that traditional resolution.
We know from well-sourced reports that the mid-Atlantic hospital chain got hit with a strain of ransomware that locked up some of its files. (In such attacks, miscreants encrypt a victim’s files and demand payment — often in the form of Bitcoin — for the decryption key.) We know that containing the problem knocked many of the chain’s computer systems offline and forced doctors and nurses to communicate via paper and fax.
But we don’t know how the attack happened or what MedStar did to fix it. And the Columbia, Md., company doesn’t plan to tell us.
“Based on the advice of IT, cybersecurity and law enforcement experts, MedStar will not be elaborating further on additional aspects of this malware event,” reads a statement posted on its site last week. “This is not only for the protection and security of MedStar Health, its patients and associates, but is also for the benefit of other healthcare organizations and companies.”
The sound of cybersecurity silence
MedStar’s case is not unique, and neither is its subsequent silence.
In February, Hollywood Presbyterian Medical Center in Los Angeles suffered a similar attack. The hospital acknowledged that it was ransomware and even specified the sum demanded (40 bitcoin, or about $17,000). But it provided no hint as to how it got hacked or what it has done to thwart future attacks.
Cybersecurity experts know this secure-it-and-shut-up routine well.
“The industry status quo is not to reveal the cause of breaches,” emailed Katie Moussouris, a Washington-based security consultant. “Disclosure often only happens when action must be taken externally to apply the defense” — that is, somebody outside the organization has to change a password, patch a server, or take a system offline.
“I can’t think of any company that’s been transparent about it,” said Ars Technica’s veteran security reporter Sean Gallagher in a Twitter direct message.
It’s not that corporate leaders don’t realize the importance of working with their peers: They do, but still would rather not reveal the ugly details of attacks. A recent survey of 700-plus C-suite executives by IBM Security found that while 55 percent favored more industry collaboration, 68 percent were reluctant to share incident information outside their own firms.
Meanwhile, attackers have fewer hang-ups about discussing their tactics. “The bad guys are always better at sharing than the good guys,” emailed Jeremy Epstein, a security scientist with SRI International.
Different ways to disclose
Other industries aren’t as opaque in documenting their mishaps. For a particularly dramatic contrast, you could look to commercial aviation.
Any serious accident spurs an investigation by the National Transportation Safety Board, and even something as relatively minor as a flight attendant breaking a passenger’s foot with a beverage cart warrants an NTSB writeup. The idea is to publicly identify what went wrong so nobody ever does it again — and it’s made flying an incredibly safe way to travel.
Epstein noted that this culture of safety owes something to government influence: “Airlines have more regulatory requirements to disclose.” In other business sectors, that influence is less pronounced.
But, he added, airlines themselves can still clam up about cybersecurity issues that don’t directly affect flight safety. He cited a run of flight cancellations last year that were apparently the result of fake flight plans that pilots immediately flagged, but which airlines later vaguely labeled as “unanticipated technical problems.”
Companies and organizations are supposed to be able to share confidential information, including details of unpatched vulnerabilities, in private forums such as industry-specific Information Sharing and Analysis Centers. For instance, airlines can team up at the Aviation ISAC, while medical facilities can collaborate privately at the National Health ISAC.
So is MedStar at least documenting what went wrong in that health care forum? The hospital won’t even say that. Said spokeswoman Ann Nickels in a text message: “I have nothing further to add.”
What silence really says
The immediate benefit of disclosure — after you’ve patched your shop and helped peers with equally sensitive systems secure their own — is education for everybody else who might not be in the same line of work but who might be running software with the same vulnerability.
“The best way to educate the public on how to not make the same mistakes is to publicly disclose the cause of a breach,” Moussouris said. But organizations don’t have much motivation to take that first step.
And until more of them do, hopelessly vague cybersecurity storylines imply that hacks just happen — they don’t — and that we must blindly trust large corporations to fix these apparently inevitable problems. That leaves us not just unaware of security flaws that might be lurking on our own computers, but generally powerless in the entire cybersecurity debate.
Moussouris, who has helped organize such collaborative vulnerability-research initiatives as the Defense Department’s “Hack the Pentagon” project, suggested it would take either regulation — “which can be more damaging than helpful in some cases” — or pressure from customers.
But if I or somebody in my family needs urgent care, and the closest hospital is a MedStar facility, am I going to complain about their infosec? Absolutely not. So this problem isn’t going away anytime soon.