Cyber Deterrence

I was reading up on cyber deterrence today and ran across this little gem in relation to nuclear deterrence:

Because of the value that comes from the ambiguity of what the US may do to an adversary if the acts we seek to deter are carried out, it hurts to portray ourselves as too fully rational and cool-headed. The fact that some elements may appear to be potentially “out of control” can be beneficial to creating and reinforcing fears and doubts within the minds of an adversary’s decision makers. This essential sense of fear is the working force of deterrence. That the US may become irrational and vindictive if its vital interests are attacked should be a part of the national persona we project to all adversaries.

–Essentials of Post Cold War Deterrence (1995)

Source: http://www.nukestrat.com/us/stratcom/SAGessentials.PDF

 

Using Economics to Diagnose Security Model Failure

Many information security practitioners labor daily to increase security for the organizations in which they work. The task itself seems beset with obstacles. On the one hand, there is the need to acquire security funding from executives who are distracted from security by the sturm und drang of daily business operations, tempered by the need to embed long-term strategy in the hearts and minds of employees. On the other hand is the near-daily obliviousness of the employees they are instructed to protect. They deal with too many clicks on too many phishing emails, accidentally unencrypted emails with government identification numbers attached, and the ever-present push to increase security awareness amongst a group that, at best, recognizes that security has something to do with firewalls and, at worst, sees security as getting in the way of generating the revenue the business is tasked with acquiring. While such a scenario may seem hopeless, it is perhaps better viewed through the lens of economics. Information security economics drive behaviors, decisions, and attitudes concerning the state of security in an organization.

In the dynamic of internal political battles over security funding and operations, it's easy to overlook the other forces at play. Through the lens of economics, we can reveal additional levers that contribute to the decision-making criteria. One of these is the pervasiveness of asymmetric information. For the average consumer, making decisions that increase security is often very difficult, as they lack two things that assist in good decision-making. The first is the domain knowledge necessary to understand what good security looks like. The dynamic between the evolution of controls and the nature and skill sets of attackers appears to shift daily. Understanding these environmental elements well enough to make a fully informed decision requires nothing less than full-time devotion, which is clearly more than the average consumer has time for. The second is the inability to directly observe the environments they are trying to measure. Because consumers are not employed in the security function of the organizations offering them security, clandestine information about that security is necessarily withheld from them. That information is vital to reaching an accurate assessment of the state of security in an organization. Often consumers are left with more readily available, yet misleading, indicators of security. From these secondary and tertiary, often latent, factors it is more difficult to derive an accurate measure of security.

An example of this battle over indirectly observable economic factors plays out in the world of financial services and banking. The average consumer may be notified by a bank that their information was in scope of a recent security breach. Such breach notification letters connote action yet offer assurance that any damages the customer may incur will be handled by an insurance provider. What is the customer to do? Should they follow the advice of the letter, that is, take no action, simply monitor their accounts for fraud, and rest assured that the firm that just lost their data will handle things, or should they move their accounts to another provider? Each customer has their own calculus for these decisions. Some will accept the premise of the letter with an uneasy feeling, while others will stand on moral outrage and move their financial accounts to another provider. Neither decision is without its drawbacks, however. In the former, the customer has to assess that while security failed once, it likely won't fail again, and that if it does, the coverage offered will be enough to offset any damages incurred. Note that in this option, the customer is forced to assess risk (the frequency of loss as well as its impact). The latter scenario presents a different calculation. First, the customer has to assess that the damages they have yet to incur are greater than the costs of switching accounts, which are not negligible. One must account for the time spent locating new providers and financial advisors, modifying automatic drafts and direct deposits, opening new accounts, and signing paperwork. This time is not trivial, and it says nothing of the most important factor in making the switch: is the firm you are moving to more secure than the one you are leaving? In truth, the average consumer will not know. They may choose a company that was not recently in the news for such problems (relying on secondarily observable, yet still latent, measures of security), but that does not mean problems have not existed in the past or will not arise again. Indeed, the security of the new firm is just as opaque as that of the one the consumer just left. While switching may satiate their moral outrage, in truth it does nothing to increase the security of their accounts.
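The stay-or-switch calculus above can be made concrete with a back-of-the-envelope sketch. Every number here is an illustrative assumption (breach probability, damages, coverage, time costs), not data from any real provider:

```python
def expected_loss(annual_breach_prob, expected_damage, insurance_coverage):
    """Expected annual out-of-pocket loss if the customer stays put."""
    return annual_breach_prob * max(expected_damage - insurance_coverage, 0)

# Staying: assume a 5% annual breach chance, $10,000 in potential damage,
# with the provider's insurance covering $9,500 of it.
stay_cost = expected_loss(0.05, 10_000, 9_500)

# Switching: assume 6 hours spent moving accounts, valued at $50/hour.
switch_cost = 6 * 50

print(f"Expected annual loss if staying: ${stay_cost:.2f}")
print(f"One-time cost of switching:      ${switch_cost:.2f}")
```

Under these made-up numbers, staying costs an expected $25 a year while switching costs $300 up front, and the switch only pays off if the new firm is meaningfully more secure, which is precisely the thing the customer cannot observe.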

This is but a brief analysis of the role economics plays in describing the behaviors, decisions, and attitudes of consumers and their security choices. It does, however, help explain the actions of large groups of people. For instance, it shows why most consumers won't move their business to another financial provider following a breach (repeat offenders, especially those with failures in quick succession, excluded). One may call this kind of behavior irrational, and indeed, many in the security community do just that, predicting wave after wave of defecting customers in a catastrophically spiraling disaster of attrition. Instead, what we observe is in direct opposition to what was predicted. When such a conflict exists between reality and a model, reality wins. Economic principles, as applied to information security, can help explain why one model has failed and why another might be more correct.

Open Group Podcast on Risk – June 2013

I participated in my second risk management podcast for the Open Group, which was published today. I like this one better than my previous one–I tried to talk more slowly this time, anyway ;-)

I was happy with the topics we discussed, most notably that as regulators become more aware of the capabilities of quantitative risk assessment techniques, they will begin demanding them from those they review. Of course, Jack and Jim were great as well, and the conversation was expertly moderated by Dana.

Substituting Risk Tolerances

I hate hand dryers in washrooms. I'm not alone: if Wikipedia is to be believed, 63% of people prefer paper towels over hand dryers in restrooms. I'd wager the other 37% chose what they thought was the right answer. Each time I use one, I end up with cold, wet hands, and if I'm forced to stand directly in front of it, water all over my clothes. I try to stand to the side, and I once watched a blower fling water all the way across the restroom–no small feat. Surely the wet, slick floor I left behind created a terrible safety hazard. Heck, there is even a dispute about how much more environmentally friendly they are (if full-cost environmental impact accounting is to be believed). My problem stems from the simple fact that they largely fail at their stated purpose: drying my hands quickly.

So if they are mostly hated, why do companies install them? To put it bluntly, it's not as if you are going to shop somewhere else because a store has hand dryers. And if studies are to be believed, companies can save 99% of the cost of paper towels in a single year.

So what does this have to do with risk? Hand dryers (to me at least) are a clear case of substituting risk tolerances. Allow me to explain. When you are done washing your hands, your primary goal is to dry them and get out of there as quickly as you can. You are probably not thinking about saving the world with your hand-drying choice, or even saving money for the business you are visiting. Your priority here (I often equate priorities with risk) is in direct conflict with the host company's. In fact, if it's your employer that has the hand dryer, it means they'd rather you stand there for some indeterminate time until your hands are dry than get back to your post as quickly as possible. Okay, so maybe you lose only a minute or two (I think most people just give up and wipe their hands on their pants, defeating the purpose), but multiply that by how many trips per day, times how many people, and it's no small investment (I used to work with process engineers who thought about stuff like this all the time).
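The trips-times-people arithmetic is easy to run. The headcount, trip frequency, extra drying time, and loaded hourly rate below are all assumptions I've made up for illustration:

```python
# Rough annual cost of hand-dryer wait time across a workforce.
employees = 500          # headcount (assumed)
trips_per_day = 3        # restroom visits per employee per day (assumed)
extra_minutes = 1.5      # extra minutes per dry vs. paper towels (assumed)
loaded_rate = 60.0       # fully loaded cost per employee-hour, dollars (assumed)
work_days = 250          # working days per year

hours_per_year = employees * trips_per_day * extra_minutes / 60 * work_days
annual_cost = hours_per_year * loaded_rate

print(f"Hours lost per year: {hours_per_year:,.0f}")
print(f"Annual cost of waiting: ${annual_cost:,.0f}")
```

With these numbers the company trades well over nine thousand employee-hours a year for the paper-towel savings, which is the substitution of risk tolerances in plain dollars.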

You may be thinking that I'm neurotic about this, and you may be right, but when you think about risk constantly like I do, you start to see it everywhere. And the hand dryer scenario is not unique. While waiting in line at IKEA at closing time one night, someone in our party asked why they didn't open more lanes. The answer is simple–what are the odds that, after spending the last couple of hours shopping and schlepping your purchases to the sole closing-time cashier, you would abandon them and sacrifice the last few hours of your life? Slim to none, I'd say. Here too is a risk-based decision: they are accepting marginal dissatisfaction in order to save some money on a second or third cashier.

These sorts of trade-offs happen all the time, and we hardly notice them. Usually they involve discounting the value of time–yours and mine–in favor of cost avoidance. I try to make these scenarios plain in my mind. I want to know when the value of my time has been discounted. I have less personal tolerance for my time being wasted, and I often seek out scenarios where I pay a premium to have more personal time in my life.

How often have your personal risk tolerances been violated without your explicit knowledge? Perhaps it's time to manage your resources better…

Private Sector Perspectives on Cyberwar

I sat through a presentation recently about cyberwar. It's a topic that engenders a lot of passion in the information security community. There seems to be a natural line drawn between those with previous experience in the military and government and those with primarily private sector experience. The typical military/government professional will attempt to elicit a response from those in private industry. Largely, those in private industry yawn (I'm excluding government contractors). And I think that is largely the right response.

Generally speaking, I want those in government to care a lot about war, and I want private industry to focus on creating value for investors, customers, and other stakeholders. A lot of cyberwar discussions talk about “kinetics,” or whether there is physical destruction. In large part, most private sector companies will not be able to withstand any sufficiently effective physical attack. This is because these organizations subscribe (implicitly or explicitly) to the theory of subsidiarity, which states in part that whatever can be accomplished at the lowest and smallest level of endeavor should be conducted there. Clearly, conducting and participating in war (cyber or otherwise, kinetic or not) is not the domain of the private sector. After all, military action is what our taxes fund (amongst other things). There is a history of the private sector being targeted by military action; taking out communications, means of production, or agriculture is a time-tested technique to bring your opponent to their knees. We don't typically see this technique in modern warfare, but it's common to apply pressure to the citizenry in order to force the hands of political leaders to yield to an enemy's demands. In my opinion, this is the form in which we will see cyberwar – attacks against the private sector in order to force the hands of politicians.

So back at the presentation, the speaker responded to the seemingly innocuous question of whether or not we could win the cyberwar. He answered the question with a question: have we ever won a war? Well, yes, of course we have. I quickly rattled off a few to the colleagues sitting at the table with me: WWII, WWI (although not well enough to avoid WWII), the Civil War, heck, the Revolutionary War, and so on. If the question was meant, or interpreted, to mean “will we ever be free of cyberwar,” then clearly the answer is no; but yes, we can of course win the wars and skirmishes that may arise in our future. However, there will always be an ever-present threat on the horizon that will demand vigilance at some level.

So how do you prepare for these kinds of skirmishes? Well, it depends on the threats you are defending against. Sophisticated nation-states likely represent the 90th, 95th, or even 99th percentile of attackers. To be clear, most organizations could spend themselves into oblivion defending against these kinds of attackers. Yet the same organizations are likely not doing an effective job of defending against attackers at even the 50th percentile. Like all risk assessments, context matters, and nowhere more so than in cyberwar. Your organization's insurance policies probably don't even cover acts of war, so if you think cyberwar is a concern for your organization, you have exposure in other places as well. Security is often surprisingly boring, and here is a great example: to defend against that 90th-percentile attacker, you probably have to start by doing a good job defending against the lower-tiered attackers. Focus on patching, currency, and user access. It's boring, but it has good payoffs. Attend the conference and enjoy the cyberwar talks, but don't forget the basics.

Pizza Sauce and Security

We held a yard sale last week. If you've ever done this, then you know the turmoil over pricing. Your stuff is valuable to you, but a hard reality often hits when you try to extract that value from the public. Put simply, your stuff typically isn't worth what you think.

Pricing your security services reflects a similar statement of risk. Many organizations mandate a security review as part of their SDLC (and if they don't, they should). Paying for it is an interesting conundrum. Once upon a time, I developed a metaphor that I thought was useful for getting to the root of the pricing problem. I called it “Pizza Sauce.” At the time, we were trying to develop a way to price the value we thought security could add to software development projects. The problem we quickly ran into was that people thought security was already part of the price (at the time we were selling to third parties, not internal organizations, but the metaphor works either way). I equated it to a pizza: if you order a pizza, you assume it comes with sauce. You'd be insulted if you received a bill for the pizza with a line item for sauce. Similarly, there is a negative perception associated with adding a line item for security (“If I don't pay extra, you'll make it insecure?”). So let's assume you created a really amazing, brand-new sauce. You can't charge extra for the sauce, but you can include pricing that reflects its value in the overall price of the pie.

Security needs to be priced similarly – namely, since people already assume security is baked in, you need to include its pricing in the overall cost in a way that doesn't encourage people to skip it to reduce costs. For many organizations, this can mean listing security personnel on project plans at a zero-dollar bill rate, or including security in the overhead charged to cost centers for general IT services.

The key takeaway is to price security so that you extract its value, but not so high as to encourage circumvention.