Risk Response Requires Critical Thinking

My @ISACA column was published today. Read it here.

 

Edited:

I realized they edited the full submission I made (I could tell because it sounded a little off from what I recalled). Below is the full post:

 

Depending on your point of view, risk management is either a very easy or a terrifically difficult job. If you approach IT risk management from a controls perspective (as in, “This asset doesn’t have all the controls listed here. That’s a risk.”), then risk management is very easy for you. Simply add the missing control and everything’s back to normal. If anyone objects to your solution, it’s very easy to show them the worst that could happen, and paint them as an irresponsible steward of your organization in order to get the funding you need.

 

If, however, you feel that the control deficiency calls for some analysis, then risk management is much more difficult. In order to analyze the risk, you need to conduct research to understand which assets reside on that system, how often it is attacked from various threat communities, and the cumulative strength of the remaining controls. This approach involves building a model of attack sequences with associated probabilities and losses and considering the risk scenario in the greater context of the organization’s goals, objectives, and overall risk posture. In other words, this approach is risk analysis in support of well-informed risk management.

It is certainly easier to respond emotionally with phrases such as "I feel like this is a high," or "I think our customers would be upset," or even, "Our CEO could end up in jail!" It's a very rare scenario in which we hear, "The analysis has shown…" Imagine buying insurance where the agent tells you they "feel" like you are high risk but are unable to tell you why. At best, emotional responses like these lead to misallocating company resources on unnecessary controls. At worst, they may make it difficult for your company to effectively compete in an evolving marketplace. Practicing risk professionally means eschewing an emotional response in favor of risk analysis. An emotional response to risk is not a valid substitute for critical thinking.

Despite all my rage…


I recently had the privilege to have some discussions with fellow members of a privacy-oriented group. They were mostly lawyers, and after a series of discussions we waded into the current disapproval over Nordstrom's practice of tracking people by Wi-Fi (see here for more on this). Basically, it's the implied consent that seems to be getting people up in arms. That, and the natural tendency to get riled up about technology-based tracking in general. I interjected that this really isn't very different from tracking customers by camera and reviewing the tapes after the fact. Admittedly, the automated element here makes this slightly different, but at its base it's still the same to me. After all, are you consenting to be recorded as you walk through the store? No, it's implied, and we've all mostly moved beyond our concerns about being recorded. But then I remembered something much more central to this debate! Allow me to paint a picture.
A very good friend of mine from college (and high school, actually) was an electrical engineering major. He had a job with a company that made lab rat cages. They sold to pharmaceutical companies, universities, you know, any place that needed something to put their white, red-eyed rats into. So why did they need an EE on staff? Well, his job was to design a monitoring solution for these cages. He configured a USB camera to record the rats, then wrote some software that divided the camera's field of vision into a grid. When the software detected movement in one of the grid cells, it incremented a counter and provided some reporting capabilities. Researchers would use this to determine how often the rats went to the water dish, spent time at the food bowl, hit the "gym" wheel, etc.
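That approach is simple enough to sketch. Below is a minimal, hypothetical reconstruction in Python using OpenCV; the grid size, motion threshold, and camera handling are my own assumptions rather than details of my friend's actual system.

```python
# Hypothetical sketch of grid-based activity counting from a fixed camera.
# Requires OpenCV (pip install opencv-python); grid size and threshold are guesses.
import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 4, 4   # divide the field of view into a 4x4 grid (assumption)
MOTION_THRESHOLD = 25         # mean per-cell pixel change that counts as movement (assumption)

cap = cv2.VideoCapture(0)     # the cage camera
counts = np.zeros((GRID_ROWS, GRID_COLS), dtype=int)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)   # pixel-wise change since the last frame
    prev_gray = gray

    h, w = diff.shape
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            cell = diff[r * h // GRID_ROWS:(r + 1) * h // GRID_ROWS,
                        c * w // GRID_COLS:(c + 1) * w // GRID_COLS]
            if cell.mean() > MOTION_THRESHOLD:   # enough change in this cell means activity
                counts[r, c] += 1                # increment that cell's counter

cap.release()
print(counts)   # e.g., the cell over the water dish shows how often the rat visited it
```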

There is absolutely nothing stopping an existing retailer from applying this technological approach (which is approaching two decades old now) using nothing more than the surveillance videos already in place. I'm willing to wager this is the current state for a lot of retailers.

So really, let's put our big boy and girl pants on and don our risk hats. Look at this holistically: if I configure a wireless access point to record attachment requests by MAC address and then correlate some logs between various devices, it's really no different than them tracking you like a rat in their cage. I mean store.

I think a lot of the privacy industry is invested in outrage, that is, greeting all new technological advances and permutations of common practice as an outright infringement of natural law and civil rights. As always, it falls upon the risk profession to act as the saucer that cools the hot coffee of others into a productive risk discussion.

I want what they’re having

When consulting on a security issue, one of the questions that makes me grind my teeth more than any other is some variation of, "What are our competitors doing?" My initial reaction is always, "Who cares?" It's really just a useless way to think about security and risk.

In my experience, no one asks this question because they are looking for a way to spend more on security, layer in additional controls to reduce fraud, or simply reduce risk. No, this question is almost always asked as an offensive against perceived unreasonableness by information security. It's a political tool or a negotiating tactic meant to make you back down. That alone should be enough reason to dismiss it outright, but there is more nuance to why it is distasteful.

Your IT risk decision-making is not a commodity market. Sure, there are security commodities, but the decision making cannot be outsourced to other organizations. Think about it: what if you dutifully came back with an answer to this question indicating that your competitors are doing not just what you are recommending but significantly more, and that their budget for this is five times what you were planning to spend?

Would they then immediately write a check for that difference? Offer an apology to you and then shuffle out the door, defeated? No, of course not. Nor should they. The risk tolerance, assets, lines of credit, cash flow, customers, budget, product mix, public profile, threat agent actions, and loss scenario probabilities are not yours. Simply put, your competitor's risk tolerance and appetite are not yours. As a result, you need to make the best decisions you can with the best (quantitative) data that you have at your disposal. Of course you should seek inspiration from various sources, if you can get it. I love the notion that security folks are a chatty sort who dish endlessly about the goings-on in their companies. Security professionals should be fired for such behavior; you don't want chatty security people working for you. Information sharing regimes, processes, and protocols exist, but data sharing at that level tends to be categorical, which often isn't useful enough to answer the question being posed. There is one exception to my rant, however, and that is legal. They probably are the ones who would advocate that budgets and controls be increased to reflect the posture of other organizations. Except legal won't fund anything, so you have to go to the business anyway.

Negligence and Compliance

Compliance is out of control. It's pervasive in our society now and there is no going back. Allow me to explain.

My kid attends pre-school. They go outside daily to play, so we were asked to provide some sunblock. Makes sense; our family is pale, so we are used to that routine. We brought it in, signed a legal release (sigh), and we were good to go.

Or so we thought.

We received an email later in the day saying that they cannot use an aerosol can and we need to provide sunblock that is a cream. Now, this wasn't communicated to us previously, so that's disappointing, but the real issue is the promulgation of the phrase, "It's our policy…" The use of this term is quickly becoming a death of a thousand cuts.

How far is this to be taken? Would they have compelled my kid to go outside in the sun to burn, while the unopened sunblock sat idly by, not protecting them from an inappropriate amount of UVA/UVB? Would they have sat self-satisfied that policy boxes were checked while children roasted in the midday sun?

“It’s our policy that we don’t use aerosol cans to apply sunblock. It might get in their eyes.”

Well, it's not pepper spray; it's not meant to be sprayed in the eyes. Everyone knows the trick of spraying it into your hand and then applying it to your face. I'm about ready to build my own set of personal policies ("That's unfortunate, but it's my policy that children not burn in the sun when sunblock is within arm's reach"), effectively pitting policy against policy in a byzantine Mexican standoff of bureaucracy and drudgery.

Since I see the world through a risk lens, I see this as a failure in risk management. Which would have exposed this organization to greater risk? The remote possibility of face spraying, or the near certitude that skin will burn? In this case, the robotic adherence to policy actually INCREASED risk in the organization by promoting what is effectively negligence.
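To frame that question as a quick expected-loss comparison (with numbers I made up purely to show the shape of the argument):

```python
# Toy expected-loss comparison for the two sunblock scenarios.
# Both the probabilities and the impact scores are invented for illustration only.
scenarios = {
    "aerosol sunblock gets in a child's eyes": (0.001, 2),  # rare, minor (assumption)
    "child burns because sunblock is withheld": (0.9, 5),   # near-certain on a sunny day, worse (assumption)
}

for name, (probability, impact) in scenarios.items():
    print(f"{name}: expected impact = {probability * impact:.3f}")

# The policy "fix" eliminates the tiny expected loss while accepting the large one.
```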

Thankfully, the outside activity that day took the kids through a shady grove, so no sunburn ensued, but this is a great example of a compliance regime going beyond what risk tolerance requires and actually increasing risk.

Frequency Matters


So there are a lot of ways to die. Like, a lot. We worry about obscure ways to die. It's gruesome, really, to die via an asteroid or "space junk" strike (so much so that we make TV shows about it), hockey puck death, or obscure elevator amputations.

…sort of like the various ways that IT security failures can cause security incidents. Now, I've argued in the past that not all failure is bad, but in this post I want to talk about an important distinction that is often missed in risk assessments: temporal factors. Put plainly, time matters.

This article and accompanying graph are a great way to organize some common ways to die (if you are looking for something to do on a Friday night). But they include something that is missing from a lot of IT risk assessments: time.

Many assessment methods will tell you to assess “likelihood” such that you end up with some values like 80% or “Medium” etc. Now if you’ve been around me for any length of time, you will know that I quote “Fight Club” prodigiously to explain the problem with these values: “On a long enough timeline, the survival rate for everyone drops to zero.” And that’s why frequency matters. 80% what, tomorrow? In the next week? Year? Ever? Imagine if weather forecasts went the same way: just a picture of a rain cloud, no date, no day of the week, just a number that says 80% chance of rain. Should you bring an umbrella tomorrow? Next week? When?
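Here is a small sketch of why the time window matters, assuming (my assumption, not the column's) that events arrive at a steady average rate, i.e., a Poisson process: the very same annual frequency implies wildly different chances depending on the window you ask about.

```python
# Probability of at least one event over different time windows,
# assuming a Poisson arrival process with a fixed annual rate (illustrative assumption).
import math

annual_rate = 1.6   # hypothetical: about 1.6 loss events per year on average

for label, years in [("tomorrow", 1 / 365), ("next week", 7 / 365),
                     ("next year", 1.0), ("next decade", 10.0)]:
    p = 1 - math.exp(-annual_rate * years)   # P(at least one event in the window)
    print(f"{label:>12}: {p:.0%}")

# tomorrow: ~0%, next week: ~3%, next year: ~80%, next decade: ~100%.
# "80% likely" only means something once you say 80% per what.
```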

So this is why I let my kid hold a baby alligator. Because, honestly, the odds of death by baby alligator are like, really really rare. Like 1/50 years or more (I dunno, I’m not an alligator expert). Plus, its mouth was taped up so yeah. Controls and such. And now she has a cool life experience and picture she can cherish :-)

So the next time you see likelihood without any reference to a time period, call Shenanigans. It's bogus science and you don't have to tolerate it.

High Accumulation

I recently relocated to Charlotte from Ohio. It's the South, but not so far south that it doesn't get cold, and yes, sometimes there is even snow. As I become acclimated to things down here, I am always surprised at the response that folks from here have to snow. They dislike it immensely and are often fearful of it. Now, I grew up in Pittsburgh, which has a lot of snow. Ohio has a lot of snow too. So, this past weekend we had some weather reports that hinted at snow, and they used a particular term that piqued my interest. The weather forecasters predicted that there would be a "high accumulation" of snowfall.

This is always the difficulty with verbal labels used to define measurements. Being from the North, where snowfall is frequent, "high" to me means 6 to 8 inches, or even a foot of snow or more. I imagine those from even farther north than I probably laugh at my ranges and speak only in double-digit feet when measuring "high" amounts of snow. As it turns out, here in Charlotte "high accumulation" means between 1 and 2 inches. Oh, and that snow was mostly melted about a day and a half later (for those that don't know, this is a marginal amount of snow, and the ensuing overreaction is largely comical to us Northerners).

The same problem is endemic to communicating risk. Just saying high, medium, or low is problematic. No one is able to divorce themselves from their biases and experiences. As a result, when you say "high risk," there will invariably be those who think $10M, and others who are thinking in terms of $100K or even less. Why? Fundamentally speaking, it's because there are no numbers. Think about how much plainer it is to speak in terms of dollars or inches. We may disagree about the relative impact of those units of measure, but no one is likely to argue that an inch isn't an inch.

So as you go through your risk work, know that if you aren't speaking plainly in universal terms (frequency and magnitude), you may be perceived as shouting at clouds…

It’s always about the money

I live in a fairly straightforward world, I guess. I'm often accused of being naive, but I'd argue I feel enlightened. Put bluntly, I think risk is always about money.

This has to do with my education from Douglas Hubbard. When I hear nebulous problems, I know I can use the methods of Enrico Fermi to decompose them by asking Fermi questions. Most people think this is too simplistic for their complicated world, but I'll side with the Nobel Prize winner on this one, thank you very much.

I often have to answer for my position when I stoically say that reputation can be measured in dollars. "But Jack," folks will say, "it's so hard to measure that." I disagree. Why does anyone care about reputation? Well, for businesses, it's about the ability to retain and acquire business. For whatever scenario you are analyzing, make some calibrated estimates of how much business you are likely to lose (use ranges), and suddenly you've applied some really complicated quantitative methods in a very practical and straightforward manner.
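As a hedged illustration of that decomposition (the ranges below are mine, invented for the example), a few lines of Python can turn "reputation damage" into a dollar distribution: estimate a range for customers lost, a range for revenue per customer, and simulate.

```python
# Minimal Monte Carlo sketch for putting "reputation damage" in dollar terms.
# The ranges are purely illustrative stand-ins for calibrated estimates;
# uniform sampling is used only to keep the sketch short.
import random

SIMULATIONS = 10_000
losses = []
for _ in range(SIMULATIONS):
    customers_lost = random.uniform(200, 2_000)       # estimated range of customers lost (assumption)
    revenue_per_customer = random.uniform(300, 900)   # annual revenue per customer (assumption)
    losses.append(customers_lost * revenue_per_customer)

losses.sort()
print(f"median loss:     ${losses[len(losses) // 2]:>12,.0f}")
print(f"90th percentile: ${losses[int(len(losses) * 0.9)]:>12,.0f}")
```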

So I was having a discussion about control automation and the reasons organizations spend money on just such a thing. Now, since my worldview on this is that risk = money, it was straightforward for me. Organizations automate controls in order to save money. The howling in protest was fierce. "Jack," they said, "automation speeds things up, so the answer must be efficiency." "Why do we care about being efficient?" I countered. "Jack, automation also reduces errors, so effectiveness is the answer," said another. "Why do we care about being effective?" I countered again. Indeed, in all aspects of risk, the answer can be boiled down to money. I want to be faster so I can have the same resources do more work, saving me the cost of buying more. I want to be more effective so that I don't have more loss events, saving me the cost of responding to them. The other common argument I hear is to blame regulators or auditors, as in, "we need to automate to satisfy the auditors." Once again, decomposition helps us analyze this argument: why do we need to satisfy them?

When presented with these sorts of questions, I come clean and admit that money drives risk and priorities. We spend money on things we care about, and we spend money to avoid losing money elsewhere. There are countless examples of us using money as a measure of time as well: how much would you spend to avoid waiting in line at Disney World? Turns out, there is a service where you can pay someone to wait in line for you, making your holiday that much more pleasurable (although I think it's only open to celebrity visitors, and priced for them as well).

It's not cynical; it's enlightened. Once you accept this basic premise, all your "hard" problems become that much easier to model.

Substituting Risk Tolerances

I hate hand dryers in washrooms. I'm not alone: if Wikipedia is to be believed, 63% of people preferred paper towels over hand dryers in restrooms. I'd wager the other 37% chose what they thought was the right answer. Each time I use one, I end up with cold, wet hands and, if I'm forced to stand in front of it, water all over my clothes. I try to stand to the side, and I once watched the blower fling water all the way across the restroom, no small feat. Surely that wet, slick floor I left behind creates a terrible safety hazard. Heck, there is even a dispute about how much more environmentally friendly they are (if full-cost environmental impact accounting is to be believed). My problem stems from the simple fact that they largely fail at their stated purpose: drying my hands quickly.

So if they are mostly hated, then why do companies implement them? Well, to put it bluntly, it's not like you are going to shop somewhere else because they have hand dryers there. And if studies are to be believed, companies can save 99% of the cost of paper towels in a single year.

So what does this have to do with risk? Hand dryers (to me at least) are a clear case of substituting risk tolerances. Allow me to explain. When you are done washing your hands, your primary goal is to dry them and get out of there as quickly as you can. You are probably not thinking about saving the world with your hand drying choice, or even saving money for the business you are at. Your priority here (I often equate priorities with risk) is in direct conflict with the host company's. In fact, if it's your employer that has the hand dryer, then it means they'd rather you stand there for some indeterminate time until your hands are dry versus getting back to your post as quickly as possible. Okay, so maybe you save a minute or two (I think most people just give up and wipe their hands on their pants, defeating the purpose), but multiply that by how many trips per day times how many people and it's no small cost (I used to work with process engineers who thought about stuff like this all the time).
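A back-of-the-envelope version of that multiplication, with every input being my own hypothetical figure:

```python
# Rough annual cost of employee time spent at hand dryers vs. paper towels.
# Every figure below is a hypothetical input for illustration.
minutes_extra_per_trip = 1.0   # extra minute per restroom trip vs. towels (assumption)
trips_per_day = 4              # restroom trips per employee per day (assumption)
employees = 500                # headcount (assumption)
working_days = 250             # working days per year (assumption)
loaded_cost_per_hour = 40.0    # fully loaded hourly cost of an employee (assumption)

hours_per_year = minutes_extra_per_trip * trips_per_day * employees * working_days / 60
dollars_per_year = hours_per_year * loaded_cost_per_hour
print(f"{hours_per_year:,.0f} hours/year, roughly ${dollars_per_year:,.0f}")
# About 8,333 hours/year, roughly $333,333, which is not a small number to weigh against paper towel savings.
```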

You may be thinking that I'm neurotic about this, and you may be right, but when you think about risk constantly like I do, you start to see it everywhere. And the hand dryer scenario is not unique. While waiting in line at IKEA at closing time one night, someone in our party asked why they didn't open up more lanes. The answer is simple: what are the odds that, after spending the last couple of hours shopping and schlepping your purchases to the sole closing-time cashier, you would abandon them and sacrifice the last few hours of your life? Slim to none, I'd say. Here too is a risk-based decision. They are accepting marginal dissatisfaction in order to save some money on a second or third cashier.

These sorts of trade-offs happen all the time and we hardly notice them. Usually they involve discounting the value of time, yours and mine, in favor of cost avoidance. I try to make these scenarios plain in my mind. I want to know when the value of my time has been discounted. I have less personal tolerance for my time being wasted, and I often seek out scenarios where I pay a premium to have more personal time in my life.

How often have your personal risk tolerances been violated without your explicit knowledge? Perhaps it's time to manage your resources better…