Welcome to 2020! Cyber Risk Prospectuses and a “Manifesto”

Welcome to 2020!

I kept busy last month, even with the holidays. Here are some updates:

I wrote a piece for ISACA about how much is being spent on cyber security in aggregate and why we need to rationalize the controls we are spending on.

The FAIR Institute called this my “manifesto” :-)

I’m also really excited that my article on Cyber Risk Prospectuses was published over at ThreatPost. I’ve been talking about this topic for about a year now. I’m not a fan of pretending that the companies we work for won’t get hacked. It’s not if, it’s when, and being clear about how long we expect to go before that loss occurs is important. The FAIR Institute summarized my point succinctly: “Admit you will probably get breached.”
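To make “how long before we expect that loss” concrete, here is a minimal sketch of the math behind a prospectus-style statement. It is my own illustration, not from the ThreatPost article, and the 20% annual breach likelihood is an invented number; a simple geometric model turns an annual probability into an expected wait and a cumulative probability over a horizon.

```python
# Hypothetical illustration: treat a material breach as an event with a
# fixed annual probability p. Under a geometric model, the expected wait
# is 1/p years and the chance of at least one breach within n years is
# 1 - (1 - p)**n. The 20% figure is invented for the example.

p_annual = 0.20  # assumed annual likelihood of a material breach

print(f"Expected years until breach: {1 / p_annual:.1f}")

for years in (1, 5, 10):
    p_by_then = 1 - (1 - p_annual) ** years
    print(f"P(breach within {years:>2} years) = {p_by_then:.0%}")

# Expected years until breach: 5.0
# P(breach within  1 years) = 20%
# P(breach within  5 years) = 67%
# P(breach within 10 years) = 89%
```

A statement like “we expect a material breach roughly once every five years” is exactly the kind of honest disclosure a cyber risk prospectus is meant to make.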

Risk Frameworks, Equifax, and Public Sector Risk

Time for another cyber risk roundup!

Using Risk to Justify Security Strategy and Spending

I wrote a piece for RiskLens* recently that talks about how to utilize FAIR for building and justifying an information security budget and strategic initiatives. It’s an interesting problem space: you need the appropriate level of abstraction (program level versus technology level), but it’s also a very solvable problem to add risk-reduction justification to these annual budgetary exercises.

Fun story: when I did this exercise years ago, I actually rated one initiative as *increasing* risk. It started an interesting discussion, but the lesson is that not everything will result in less risk to your organization. Budgeting is a complicated amalgam of math, politics, and priorities; be sure to bolster your budgeting process with some risk arguments.
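As a rough illustration of attaching risk-reduction numbers to an initiative (this is my own minimal sketch, not the RiskLens method; the distributions and dollar figures are invented), you can simulate annualized loss exposure before and after the initiative, FAIR-style, as loss event frequency times loss magnitude:

```python
import random

def simulate_ale(lef_range, lm_range, trials=100_000):
    """Monte Carlo estimate of annualized loss exposure (ALE).

    lef_range: (min, max) loss event frequency, in events per year
    lm_range:  (min, max) loss magnitude, in dollars per event
    Uniform ranges are a simplification; FAIR tooling typically uses
    calibrated PERT or lognormal distributions instead.
    """
    total = 0.0
    for _ in range(trials):
        total += random.uniform(*lef_range) * random.uniform(*lm_range)
    return total / trials

# Invented numbers for one hypothetical initiative:
current_ale = simulate_ale(lef_range=(0.5, 2.0), lm_range=(100_000, 1_000_000))
future_ale  = simulate_ale(lef_range=(0.2, 1.0), lm_range=(100_000, 1_000_000))

reduction = current_ale - future_ale   # ~$350k/yr with these inputs
initiative_cost = 150_000              # assumed annual cost of the initiative

print(f"Estimated risk reduction: ${reduction:,.0f}/yr")
print(f"Risk reduced per dollar spent: {reduction / initiative_cost:.1f}x")
```

If the reduction comes out negative, you have found one of those initiatives that *increases* risk, which is exactly the discussion-starter from the story above.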

Click here for the RiskLens article: How CISOs Use FAIR to Set Strategic Priorities for Spending

*I am a professional advisor for RiskLens

Cyber Deterrence

I was reading up on cyber deterrence today and ran across this little gem in relation to nuclear deterrence:

Because of the value that comes from the ambiguity of what the US may do to an adversary if the acts we seek to deter are carried out, it hurts to portray ourselves as too fully rational and cool-headed. The fact that some elements may appear to be potentially “out of control” can be beneficial to creating and reinforcing fears and doubts within the minds of an adversary’s decision makers. This essential sense of fear is the working force of deterrence. That the US may become irrational and vindictive if its vital interests are attacked should be a part of the national persona we project to all adversaries.

–Essentials of Post-Cold War Deterrence (1995)

Source: http://www.nukestrat.com/us/stratcom/SAGessentials.PDF

Using Economics to Diagnose Security Model Failure

Many information security practitioners labor daily to increase security for the organizations in which they work. The task itself seems beset with obstacles. On the one hand, there is the need to acquire security funding from executives who are distracted from security by the Sturm und Drang of daily business operations, tempered by the need to embed long-term strategy in the hearts and minds of employees. On the other hand is the near-daily obliviousness of the employees they are instructed to protect. They deal with too many clicks on too many phishing emails, accidentally unencrypted emails with government identification numbers attached, and the ever-present push to increase security awareness amongst a group that, at best, recognizes that security has something to do with firewalls and, at worst, sees it as getting in the way of the business generating the revenue it is tasked with acquiring. While such a scenario may seem hopeless, it is perhaps better viewed through the lens of economics. Information security economics drive behaviors, decisions, and attitudes concerning the state of security in an organization.

In the dynamic of internal political battles over security funding and operations, it’s easy to overlook the other forces at play. Through the lens of economics, we can reveal additional levers that contribute to the decision-making criteria. One of these is the pervasiveness of asymmetric information. For the average consumer, making decisions that increase security is often very difficult, as they lack two things that would assist them in good decision-making. The first is the domain knowledge necessary to understand what good security looks like. The dynamic between the evolution of controls and the nature and skill sets of attackers appears to shift daily. Understanding these environmental elements well enough to make a fully informed decision requires nothing less than full-time devotion, which is clearly more than the average consumer has time to give. The second is the ability to directly observe the environments they are trying to measure. Because they aren’t employed in the security function of the organizations offering them security, they are necessarily cut off from inside information about that security, information that is vital to reaching an accurate conclusion about an organization’s security posture. Consumers are often left with more readily available, yet misleading, indicators of security, and it is difficult to derive an accurate measure of security from these secondary and tertiary, often latent, factors.

An example of this battle of indirectly observable economic factors plays out in the world of financial services and banking. The average consumer may be notified by a bank that their information was in scope of a recent security breach. Such breach notification letters connote action yet offer assurance that any damages the customer may incur will be handled by an insurance provider. What is the customer to do? Should they follow the advice of the letter (that is, take no action, just monitor their accounts for fraud, and rest assured that the firm that just lost their data will handle things), or should they move their accounts to another provider? Each customer has their own calculus for making these decisions. Some will accept the premise of the letter with an uneasy feeling, yet others will stand on moral outrage and move their financial accounts to another provider. Neither decision is without drawbacks, however. In the former, the customer has to assess that while security failed once, it likely won’t fail again, and that if it does, the coverage offered will offset any damages incurred. Note that in this option, the customer is forced to assess risk (the frequency of loss as well as its impact). The latter scenario presents a different calculus. First, the customer has to assess that whatever damages they have yet to incur will be greater than the cost of switching accounts, which is not negligible. One must account for the time spent locating new providers and financial advisors, modifying automatic drafts and direct deposits, opening new accounts, and signing paperwork. This time is not trivial, and it says nothing of the most important factor in making the switch: is the firm you are moving to more secure than the one you are leaving? In truth, the average consumer will not know. They may choose a company that has not recently been in the news for such problems (relying on secondarily observable, yet still latent, measures of security), but that does not mean problems have not occurred in the past or will not occur again. Indeed, the security of the new firm is just as opaque as that of the one the consumer just left. While switching may satiate their moral outrage, in truth it does nothing to increase the security of their accounts.
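To make the stay-versus-switch calculus concrete, here is a toy expected-cost model. All numbers are invented for illustration; the point is the structure of the decision, not the figures.

```python
# Toy model of the stay-vs-switch decision described above.
# Every number here is invented for illustration.

p_breach_again   = 0.10   # assessed chance the current bank suffers another breach this year
expected_damages = 500.0  # expected out-of-pocket loss per breach, after insurance coverage
switching_cost   = 300.0  # hours on new accounts, drafts, and paperwork, valued in dollars

# The new firm's security is unobservable (asymmetric information),
# so absent better data the customer must assume a similar breach rate.
p_breach_new = 0.10

cost_of_staying   = p_breach_again * expected_damages                  # $50
cost_of_switching = switching_cost + p_breach_new * expected_damages   # $350

print(f"Expected cost of staying:   ${cost_of_staying:,.0f}")
print(f"Expected cost of switching: ${cost_of_switching:,.0f}")
```

With the new firm’s security just as opaque as the old firm’s, switching buys moral satisfaction but no reduction in expected loss, which is why this model predicts most customers stay put.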

This is but a brief analysis of the role economics plays in describing the behaviors, decisions, and attitudes of consumers and their security choices. However, it does help to explain the actions of large groups of people. For instance, it shows why most consumers won’t move their business to another financial provider following a breach (repeat offenders, especially those with failures in quick succession, excluded). One may call this kind of behavior irrational, and indeed, many in the security community do just that, predicting wave after wave of defecting customers in a catastrophically spiraling disaster of attrition. Instead, what we observe is in direct opposition to what was predicted. When such a conflict exists between reality and a model, reality wins. Economic principles, as applied to information security, can help explain why one model has failed and why another might be more correct.

Open Group Podcast on Risk – June 2013

I participated in my second risk management podcast for the Open Group, which was published today. I like this one better than my previous one; I tried to talk slower in this one, anyways ;-)

I was happy with the topics we discussed, most notably that as regulators become more aware of the capabilities of quantitative risk assessment techniques, they will begin demanding them from those they review. Of course, Jack and Jim were great as well, and the conversation was expertly moderated by Dana.