Below is Steven Chabinsky’s presentation from the American Center for Democracy’s symposium on “Energy, Space and Cyber Security: Current and Future Threats,” on September 30, 2013. Chabinsky focuses on the variety of cyber vulnerability and on deterrence as opposed to defense.
Cyber Survival: Why We’re Losing
and What’s Needed to Win
by Steven Chabinsky*
Cyber security is not just about the computer on your desk, or even the remote computer sitting somewhere in what we now call the cloud. A different way of looking at it is to consider cyber security an issue that concerns any technology that has a computer chip in it. Cyber security issues extend to information and information systems, and increasingly they extend to products and services we use in our day-to-day lives. We are facing a technology issue in which your information faces vulnerabilities similar to those facing, for example, the new generation of biomedical implant devices that allow for remote diagnostics.
When we think about the harms that can befall our information, information systems, products, and services, we typically group them into categories involving risk to their confidentiality, integrity, and availability. Every day in the newspapers we read about harms to confidentiality. Every day someone’s online data is compromised and corporate trade secrets are stolen. But that’s not what keeps most people up at night.
Rather, the possibility of having integrity problems, where you cannot trust the data that you’re seeing, is a far greater problem. The idea that you could alter perceptions through technology is the digital equivalent of the scene in the Mission: Impossible movie in which a security camera sits in the corner of a room, but the night watchman is deceived by the spy who created a picture of the empty room, placed it at the right focal length in front of the camera, and then went on to do anything in the room he wanted.
The cyber equivalent is happening now. Indeed, it happened ten years ago to the electric power grid, when software failures in an Ohio operations center resulted in computer screens that never updated to reflect the developing, and increasingly bleak, situation. As far as the control room was concerned, everything was great. Meanwhile, there was a rolling blackout, and the Midwest witnessed the shutdown of more than 250 power plants, including 10 nuclear power stations. So, you might be inclined to say, “but that wasn’t from a hacker, I remember it was merely a computer glitch.” You would be right. Still, I’m reminded of the saying that anything that can happen by accident can happen on purpose. In other words, just because this particular example was accidental, don’t feel a false sense of hope that the next time it won’t be intentional and calculated to result in maximum harm.
In addition to crimes against confidentiality and integrity, we are concerned with issues of availability. Talks about availability tend to focus on Distributed Denial of Service, or DDoS, attacks, the idea that somebody is sending so much traffic to a website or server that nobody can access it. Worse yet, though, you might have seen what happened last year to Saudi Aramco, the most valuable company in the world, which reportedly fell victim to a malware infection that purposefully destroyed 30,000 of their computers. Yes, thirty thousand.
As you can see, cyber security concerns extend beyond someone viewing your personal information. The big-ticket items involve information and technology that is rendered unreliable, untrusted, and left irreplaceably in ruins. As to these issues, Bill Forstchen’s novel One Second After must be considered one of the most significant works of our time. In it, we are exposed to the nightmares of what happens when technology is no longer available to us. One of the most remarkable aspects of the novel in my view, the core of its brilliance, is that it is set in a small town, an area that is rural and not densely populated, where you would consider it most likely that people could survive without technology. Yet even there we find utter chaos, confusion, and death. You can only extrapolate from that small town to imagine what is happening in the major cities.
And so, when I hear people talk about a cyber 9/11, or a cyber Pearl Harbor, I’m quite dismissive of those as being appropriate analogies. Instead, what I believe is that we very much might face the equivalent of a cyber Katrina, where we don’t have resources, we don’t have potable water, we don’t have electricity. What we have are all of the cascading harms that are reflected in Bill Forstchen’s writings, which are every bit as devastating as, if not more devastating than, planes with bombs or planes as bombs. These effects are real possibilities, and nations recognize it. Only a couple of years ago, the China Youth Daily featured an article expressing, “Just as nuclear warfare was the strategic war of the industrial era, cyber-warfare has become the strategic war of the information era, and this has become a form of battle that is massively destructive and concerns the life and death of nations.”
Non-nuclear electromagnetic pulse is certainly an emerging threat against availability and, as a result, an emerging risk to our very way of life. I greatly appreciate the efforts of the American Center for Democracy in bringing thought leadership and emphasis to this important topic. Of more immediate concern, however, may be EMP’s baby brother, “purposeful interference,” more commonly known as jamming. We already are seeing people with $25 illegal jammers interfere with the electromagnetic spectrum, most commonly focused on impeding mobile communications. Think about a situation that requires emergency responders to talk with each other, perhaps an active shooter scenario, hindered through purposeful interference.
We are only now beginning to understand how reliant we have become on wireless devices. But, it’s not just about your phone calls, although it certainly includes those. It’s not just about being able to check your email, although it includes that as well. In addition, it may be about critical infrastructure and the ability, for example, to change train tracks through wireless communications. And then we have GPS. When people think about GPS they immediately think about positioning and navigation. But an additional feature of GPS that we’ve grown increasingly reliant upon is its timing signal. And so, if you could interfere with GPS, the timing elements that we’ve relied upon for interoperability and synchronization of networked systems could be rendered inadequate, if not entirely useless.
Stepping back for a moment, we are forced to take in the entire picture of how vulnerable all of our data and systems are, how they can impact our critical infrastructure, our privacy, and even our personal health. On top of that, we must consider the world economy. Everybody knows that our economy no longer runs on a gold standard. There’s no precious metal that reflects every dollar we have. However, what most people don’t stop to consider is that there is no physical dollar that represents every dollar we have. At the end of the day, these are mostly accounting entries, reconciled in the trillions of dollars, and the integrity of that data is what makes up the world’s economy.
Yet, despite our increasing reliance upon data integrity and security, our culture has created a demand for products and services that are quick to market without resilience, or reliability, or secondary systems in place should our new, untested ways fail. This is quite serious, and I appreciate the opportunity to discuss this with everyone here in order to focus our mutual efforts on improved security.
[Rachel Ehrenfeld: What do you think can be done?]
I think that there are solution sets. One thing, I believe, is that we have failed in a meaningful way to exercise common enterprise risk management principles in this area. We tend to treat the entire Internet and our technologies as needing to share a common environment. It is almost as though we think everyone needs the same levels of privacy and security, and as a result that everyone should use the same Internet protocols and standards for interoperability. This is quite preposterous. When I go to the gas station, I can’t use a diesel pump to put gas in my regular car. The nozzle simply won’t fit. But when I was working at the FBI, I had an unclassified computer, a secret computer, and a top-secret computer, and I could use the same thumb drive to move data back and forth between all of them (although I didn’t). The computers were differentiated only by the stickers we put on them, indicating their classification levels. The computers themselves were the same computers that are available to you in any common consumer store. So that’s the first thing. That has to change. We’ve got to figure out that there are different priorities and that our security posture needs to be different depending on those priorities.
The second thing is, you cannot have meaningful security without meaningful threat deterrence, unless we all decide to live in a bunker. It’s just not a possibility. When you think through the risk model, you only have three levers to work from. You can lower the threat, you can lower the vulnerability, or you can lower the consequences. That’s what you get to play with; those are your opportunities. We have seen an almost tunnel-like focus on vulnerability mitigation over the past 15 years. It is impossible to create software and hardware that is interoperable, impenetrable, and iterative. That is as absurd, or actually more absurd, than thinking of creating physical environments where communities are impervious to intentional attack. It is not in any way, shape, or form a possibility. It is even worse, I would postulate, in the technology area, because technology is less static than a building. Technology is dynamic; it is constantly evolving with new software, new hardware, and new applications, with each one being quicker to market than the earlier version.
What you see as a result of this is that vulnerability mitigation has worked best in the area of reducing cyber crimes of opportunity, and even then it has serious limitations. We patch our systems, we update our software, and as a result the common criminal doesn’t break into those better-protected systems. They break into the systems that haven’t done that. That’s the same as in the real world. If someone just wants a TV, and your house has the door locked, they don’t go to your house; they go to the one that doesn’t have the door locked. Now, consider for a second: if everybody locked their doors, what would happen? You would see a shift. Burglars would start going through windows, and vulnerability mitigation practices would repeat themselves in that context. In essence, best practices would be raised to protect doors and windows.
Obviously there’s a point where vulnerability mitigation efforts need to stop. We don’t start first with locks on doors, then with locks on doors and windows, then with bars on doors and windows, and then with underground bunkers. That’s not how it works. Instead, we immediately shift to threat deterrence once standard vulnerability mitigation opportunities are no longer cost effective. We put up alarms, we put up video cameras, and those basically say to the adversary: we concede the ground, but now it’s no longer about us. It’s about you. You can get in, but now we’re going to detect you, we’re going to find you, and you will suffer a penalty. It won’t be worth it for you.
Could you imagine if, at your place of business, the alarm went off at 3:00 in the morning, and the monitoring company called you and said: someone just broke through the front door of your place of business, but don’t be concerned, we have the locksmith on the way. How absurd, right? We don’t do that. We call the police. And that is the only reason why burglars don’t like to rob places that have alarm systems. It’s not the noise that bothers them.
Yet every day, tens of thousands of times a day, across this country we have enemies who are trying to break into our critical infrastructure and into our military institutions, and the response has been to tell the chief information security officer: Make sure you’re continuously monitoring to patch your systems. It doesn’t work, it won’t work, it will never work. So the next strategic opportunity, after we figure out what’s important, is to make sure that we build the software, hardware, and protocols necessary for detection, attribution, and penalty-based deterrence.
There are opportunities here that, I think, actually are a happy coincidence. I would suggest that in a lot of areas where security is the most needed, privacy rights are actually not the most necessary. Take the electric power grid, for example. The electric power grid is a high security system in which the owners and operators do not want or need anonymity. No one who isn’t authorized should be touching those systems. The owners, operators, and employees of an electric power company want perfect attribution. So that’s an area that’s ripe for new software, new hardware, new security policies, and less interoperability, all of which should add up to say to would-be attackers: if you are found in our infrastructure (and you will be, because we have designed this system for detection and attribution), there will be penalties.
So, I think there are opportunities, but the first step is to distinguish what we need to protect most, to build in proper threat deterrent models that promote detection and attribution consistent with privacy demands, and then to ensure that policies and resources are in place that will make the possibility of our adversaries being brought to justice a reality.