Open Group Podcast on Risk – June 2013

I participated in my second risk management podcast for the Open Group, which was published today. I like this one better than my previous one; I tried to talk more slowly this time, anyway ;-)

I was happy with the topics we discussed, most notably the idea that as regulators become more aware of the capabilities of quantitative risk assessment techniques, they will begin demanding them from those they review. Of course, Jack and Jim were great as well, and the conversation was expertly moderated by Dana.

Most Likely Fined Like

A recent article in Insurance and Technology made me think about the nature of identity as it relates to information risk management. If we take a look at the list of companies from which data is being collected, I can't help but wonder if there is enough similarity between these companies to make some basic risk assumptions about them.

If we think about the various loss forms in a FAIR loss magnitude assessment, the one this helps with is Fines and Judgments. In other words, I'm drawing a line from Cuomo's request to a concept I'm calling "Most Likely Fined Like" (MLFL). There is an interesting element to this for me, namely that these companies are not all purely insurance companies. Many companies on this list would balk at being considered like each other. Some do life insurance or car insurance, others do health insurance, and some do all of this plus financial services, investments, and so on. All of this contributes to different types of losses (their primary value propositions obviously differ). These companies also have different public profiles, which contributes to how often they will be attacked.

This sort of analysis is at the core of a sophisticated risk analysis. Looking at secondary loss factors can be tricky, as these values tend to get more abstract, but Most Likely Fined Like can be a good mental model for grabbing data points from other companies and expanding the pool of data from which you are extrapolating your ranges. You may get pushback: "We don't sell commercial auto policies," or "We are a financial services company that happens to sell annuities." I'm not defining corporate identity, strategy, or vision here; I'm trying to model the reality in which we are operating. And I suspect that if any company on this list were to experience a regulatory fine due to information security failures, it would be a great data point for all of the others. This is a risk assessment technique you can put in your pocket for the next time you are in a tough spot identifying loss values.
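
To make MLFL concrete, here is a minimal sketch of seeding a Fines and Judgments range from peer fines. Every peer and dollar figure below is invented; the point is the calibration pattern, not the numbers:

```python
# Hypothetical sketch of MLFL: seed a FAIR "Fines and Judgments" range
# from fines levied against peer companies. All figures are invented.
peer_fines = [
    1_500_000,  # peer A: state regulator, data breach
    250_000,    # peer B: consent order
    4_000_000,  # peer C: multi-state settlement
    900_000,    # peer D: privacy violation
]

# Use the observed spread to bound the estimate; take the (upper) median
# as a starting "most likely" value for the loss magnitude distribution.
low, high = min(peer_fines), max(peer_fines)
most_likely = sorted(peer_fines)[len(peer_fines) // 2]

print(f"Fines & Judgments estimate: {low:,} / {most_likely:,} / {high:,}")
```

Those three values (minimum, most likely, maximum) can then seed a triangular or PERT distribution in your loss magnitude model.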

Risk Response Requires Critical Thinking

My @ISACA column was published today. Read it here.

 

Edited:

I realized they edited the full submission I made (I could tell because it sounded a little off from what I recalled). Below is the full post:

 

Depending on your point of view, risk management is either a very easy or a terrifically difficult job. If you approach IT risk management from a controls perspective (as in, “This asset doesn’t have all the controls listed here. That’s a risk.”), then risk management is very easy for you. Simply add the missing control and everything’s back to normal. If anyone objects to your solution, it’s very easy to show them the worst that could happen, and paint them as an irresponsible steward of your organization in order to get the funding you need.

 

If, however, you feel that the control deficiency calls for some analysis, then risk management is much more difficult. In order to analyze the risk, you need to conduct research to understand which assets reside on the system in question, how often it is attacked by various threat communities, and the cumulative strength of the remaining controls. This approach involves building a model of attack sequences with associated probabilities and losses, and considering the risk scenario in the greater context of the organization's goals, objectives, and overall risk posture. In other words, this approach is risk analysis in support of well-informed risk management.
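
To make that concrete, here is a minimal Monte Carlo sketch of such an analysis in the spirit of FAIR. Every parameter in it is an illustrative assumption, not data from any real assessment:

```python
# A minimal Monte Carlo sketch in the spirit of FAIR: simulate a year of
# attack activity and sum the losses from the attacks that succeed.
# Every parameter below is an illustrative assumption.
import random

TRIALS = 10_000
annual_losses = []

for _ in range(TRIALS):
    # Threat event frequency: attacks per year, sampled from a range.
    attacks = random.randint(2, 12)
    # Vulnerability: chance a given attack defeats the remaining controls.
    successes = sum(1 for _ in range(attacks) if random.random() < 0.15)
    # Loss magnitude per successful event: triangular(low, high, mode) dollars.
    loss = sum(random.triangular(50_000, 2_000_000, 300_000)
               for _ in range(successes))
    annual_losses.append(loss)

annual_losses.sort()
print(f"Median annualized loss: ${annual_losses[TRIALS // 2]:,.0f}")
print(f"95th percentile:        ${annual_losses[int(TRIALS * 0.95)]:,.0f}")
```

The output is a distribution of annualized loss rather than a single "Medium," which is what lets you weigh the scenario against the organization's goals and risk posture.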

It is certainly easier to respond emotionally with phrases such as "I feel like this is a high," or "I think our customers would be upset," or even, "Our CEO could end up in jail!" It's a very rare scenario in which we hear, "The analysis has shown…" Imagine buying insurance from an agent who tells you they "feel" you are high risk but is unable to tell you why. At best, emotional responses like these encourage misallocating company resources to unnecessary controls. At worst, they may make it difficult for your company to compete effectively in an evolving marketplace. Practicing risk professionally means eschewing the emotional response in favor of risk analysis. An emotional response to risk is not a valid substitute for critical thinking.

Despite all my rage…

I recently had the privilege of having some discussions with fellow members of a privacy-oriented group. They were mostly lawyers, and after a series of discussions we waded into the current disapproval of Nordstrom's practice of tracking people by Wi-Fi (see here for more on this). Basically, it's the implied consent that seems to be getting people up in arms. That, and the natural tendency to get riled up about technology-based tracking in general. I interjected that this really isn't very different from tracking customers by camera and reviewing the tapes after the fact. Admittedly the automated element makes this slightly different, but at its base it's still the same to me. After all, are you consenting to be recorded as you walk through the store? No, it's implied, and we've all mostly moved beyond our concerns about being recorded. But then I remembered something much more central to this debate! Allow me to paint a picture.
A very good friend of mine from college (and high school, actually) was an electrical engineering major. He had a job with a company that made lab rat cages. They sold to pharmaceutical companies, universities, you know, any place that needed something to put their white, red-eyed rats into. So why did they need an EE on staff? Well, his job was to design a monitoring solution for these cages. He configured a USB camera to record the rats, then wrote some software that divided the camera's field of vision into a grid. When the software detected movement in one of the grid cells, it incremented a counter and provided some reporting capabilities. Researchers would use this to determine how often the rats went to the water dish, spent time at the food bowl, hit the "gym" wheel, etc.
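
For the curious, the core of that monitoring trick fits in a few lines of frame differencing today. Here is a rough, hypothetical reconstruction (his software predates OpenCV, so this is a sketch of the idea, not his code; the grid size and thresholds are arbitrary):

```python
# A rough, hypothetical reconstruction of the rat-cage monitor: difference
# successive camera frames, split the frame into a grid, and bump a
# counter for any cell showing enough motion.
import cv2
import numpy as np

ROWS, COLS = 4, 4
MOTION_THRESHOLD = 500  # changed pixels per cell that count as "movement"
counts = np.zeros((ROWS, COLS), dtype=int)

cap = cv2.VideoCapture(0)  # the USB camera
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(1000):  # watch 1,000 frames, then report
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 1, cv2.THRESH_BINARY)
    h, w = mask.shape
    for r in range(ROWS):
        for c in range(COLS):
            cell = mask[r*h//ROWS:(r+1)*h//ROWS, c*w//COLS:(c+1)*w//COLS]
            if cell.sum() > MOTION_THRESHOLD:
                counts[r, c] += 1  # e.g., the "water dish" cell tally
    prev = gray

print(counts)  # visits per grid cell over the observation window
```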

There is absolutely nothing stopping an existing retailer from applying this technological approach (which is approaching two decades old now) using nothing more than the surveillance video already in place. I'm willing to wager this is the current state at a lot of retailers.

So really, let's put our big boy and girl pants on and don our risk hats. Look at this holistically: if I configure a wireless access point to record association requests by MAC address and then correlate some logs across various devices, it's really no different from tracking you like a rat in their cage. I mean store.
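
And the correlation itself is trivial. Here is a toy sketch, with an invented log format, of turning access point sightings into a per-device path:

```python
# A toy version of the correlation: given (timestamp, access point, MAC)
# sightings, reconstruct each device's path through the store. The log
# format and values are invented for illustration.
from collections import defaultdict

sightings = [
    ("09:01", "ap-entrance", "aa:bb:cc:dd:ee:ff"),
    ("09:04", "ap-menswear", "aa:bb:cc:dd:ee:ff"),
    ("09:09", "ap-shoes",    "aa:bb:cc:dd:ee:ff"),
    ("09:02", "ap-entrance", "11:22:33:44:55:66"),
]

paths = defaultdict(list)
for ts, ap, mac in sorted(sightings):
    paths[mac].append(ap)

for mac, path in paths.items():
    print(mac, "->", " -> ".join(path))
```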

I think a lot of the privacy industry is invested in outrage, that is, in greeting every new technological advance and permutation of common practice as an outright infringement of natural law and civil rights. As always, it falls upon the risk profession to act as the saucer and cool the hot coffee of others into a productive risk discussion.

I want what they’re having

When consulting on a security issue, one of the questions that makes me grind my teeth more than any other is some variation of, "What're our competitors doing?" My initial reaction is always, "Who cares?" It's really just a useless way to think about security and risk.

In my experience, no one asks this question because they are looking for a way to spend more on security, layer in additional controls to reduce fraud, or simply reduce risk. No, this question is almost always asked as an offensive against perceived unreasonableness on the part of information security. It's a political tool, a negotiating tactic meant to make you back down. That alone should be enough reason to dismiss it outright, but there is more nuance to why it is distasteful.

Your IT risk decision-making is not a commodity market. Sure, there are security commodities, but the decision-making itself cannot be outsourced to other organizations. Think about it: what if you dutifully came back with an answer indicating that your competitors are doing not just what you are recommending but significantly more, and that their budget is five times what you were planning to spend?

Would they then immediately write a check for the difference? Offer you an apology and shuffle out the door, defeated? No, of course not. Nor should they. Their risk tolerance, assets, lines of credit, cash flow, customers, budget, product mix, public profile, threat agent activity, and loss scenario probabilities are not yours. Simply put, your competitor's risk tolerance and appetite are not your own. As a result, you need to make the best decisions you can with the best (quantitative) data you have at your disposal. Of course you should seek inspiration from various sources, if you can get it. I love the notion that security folks are a chatty sort who dish endlessly about the goings-on in their companies. Security professionals should be fired for such behavior; you don't want chatty security people working for you. Information-sharing regimes, processes, and protocols exist, but data shared at that level tends to be categorical, which often isn't useful enough to answer the question being posed. There is one exception to my rant, however, and that is legal. They probably are the ones who would advocate that budgets and controls be increased to reflect the posture of other organizations. Except legal won't fund anything, so you have to go to the business anyway.

Negligence and Compliance

Compliance is out of control. It's pervasive in our society now, and there is no going back. Allow me to explain.

My kid attends preschool. The kids go outside daily to play, so we were asked to provide some sunblock. Makes sense; our family is pale, so we are used to that routine. We brought it in, signed a legal release (sigh), and we were good to go.

Or so we thought.

We received an email later in the day saying that they cannot use an aerosol can and that we need to provide sunblock in cream form. Now, this wasn't communicated to us previously, so that's disappointing, but the real issue is the promulgation of the phrase, "It's our policy…" The use of this phrase is quickly becoming a death of a thousand cuts.

How far is this to be taken? Would they have compelled my kid to go outside in the sun to burn, while the unopened sunblock sat idly by, not protecting them from an inappropriate amount of UVA/UVB? Would they have sat self-satisfied that policy boxes were checked while children roasted in the midday sun?

“It’s our policy that we don’t use aerosol cans to apply sunblock. It might get in their eyes.”

Well, it's not pepper spray; it's not meant to be sprayed in the eyes. Everyone knows the trick of spraying it into your hand and then applying it to your face. I'm about ready to build my own set of personal policies ("That's unfortunate, but it's my policy that children not burn in the sun when sunblock is within arm's reach"), effectively pitting policy against policy in a byzantine Mexican standoff of bureaucracy and drudgery.

Since I see the world through a risk lens, this looks to me like a failure in risk management. Which would have exposed this organization to greater risk? The remote possibility of face spraying, or the near certainty that skin will burn? In this case, the robotic adherence to policy actually INCREASED risk in the organization by promoting what is effectively negligence.
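
For fun, here is the back-of-the-napkin version of the comparison that the policy skipped. Every number below is invented; the point is the shape of the analysis, not the values:

```python
# A toy expected-harm comparison of the two outcomes the sunblock policy
# trades off. All numbers are invented for illustration.
p_eye_irritation = 0.01  # chance per application that spray gets in the eyes
harm_eye = 1             # transient discomfort (relative harm units)

p_sunburn = 0.90         # chance of burning with no sunblock applied at all
harm_sunburn = 10        # painful burn plus longer-term skin damage

print(f"expected harm, spray anyway:  {p_eye_irritation * harm_eye:.2f}")
print(f"expected harm, follow policy: {p_sunburn * harm_sunburn:.2f}")
```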

Thankfully, the outside activity that day took the kids through a shady grove, so no sunburn ensued. But this is a great example of a compliance regime exceeding sensible risk tolerance and, in doing so, actually increasing risk.

Frequency Matters

So there are a lot of ways to die. Like, a lot. We worry about obscure ways to die. It's gruesome, really: death via asteroid or "space junk" strike (so much so that we make TV shows about it), death by hockey puck, or obscure elevator amputations.

…sort of like the various ways that IT security failures can cause security incidents. Now, I've argued in the past that not all failure is bad, but in this post I want to talk about an important distinction that is often missed in risk assessments: temporal factors. Put plainly, time matters.

This article and its accompanying graph are a great way to organize some common ways to die (if you are looking for something to do on a Friday night). But they include something that is missing from a lot of IT risk assessments: time.

Many assessment methods will tell you to assess "likelihood" such that you end up with values like 80% or "Medium." Now, if you've been around me for any length of time, you'll know that I quote "Fight Club" prodigiously to explain the problem with these values: "On a long enough timeline, the survival rate for everyone drops to zero." And that's why frequency matters. An 80% chance of what? Tomorrow? In the next week? This year? Ever? Imagine if weather forecasts worked the same way: just a picture of a rain cloud, no date, no day of the week, just a number that says 80% chance of rain. Should you bring an umbrella tomorrow? Next week? When?
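
To illustrate, here is a quick sketch of how one and the same event frequency turns into wildly different "likelihoods" depending on the horizon. It assumes events arrive independently at a constant rate (a Poisson process), and the half-an-event-per-year rate is made up:

```python
# Probability of at least one event over different horizons, given a
# constant arrival rate (Poisson assumption). The rate is invented.
import math

rate = 0.5  # expected events per year

for years in (1 / 52, 1 / 12, 1.0, 5.0, 10.0):
    p = 1 - math.exp(-rate * years)
    print(f"P(at least one event in {years:6.2f} yr) = {p:5.1%}")
```

The same rate yields roughly a 4% chance in a month, 39% in a year, and 99% over a decade, which is exactly why a naked "80%" tells you nothing.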

So this is why I let my kid hold a baby alligator. Because, honestly, the odds of death by baby alligator are really, really low. Like once in 50 years or more (I dunno; I'm not an alligator expert). Plus, its mouth was taped shut, so yeah. Controls and such. And now she has a cool life experience and a picture she can cherish :-)

So the next time you see a likelihood without any reference to a time period, call shenanigans. It's bogus science, and you don't have to tolerate it.