I’d like the medium please

I was thinking about risk heatmaps the other day and how organizations use different labels. Some stick with the tried and true: High, Medium, and Low. Oftentimes an interesting label is added: severe, important, serious, OMG, Armageddon, and then the highest, PCI. Intrinsically, these labels do little to communicate the relative risk. Research has indicated that not only do different people perceive the same label differently, but that this holds true even when they are told the underlying scale to which the label corresponds. What struck me the other night, however, was something less academic: how similar this all is to drink cup sizes.

I won’t do it justice here, but countless people have ranted about how there’s no such thing as a medium anymore and everything starts as a large and goes from there. I can’t tell you how many times I’ve been “corrected” while ordering a medium that it’s now called the large, and so on. Seriously, I just want the middle option; I don’t care what it’s called. Starbucks has its own labels that eschew the common parlance, which has spawned its own backlash (I still won’t say “Tall” even if my life depends on it). So why do these labels even matter?

Well, to start, it expedites the ordering process. If we had to sit and wonder whether we wanted 32 or 18 ounces, we’d have a whole other imbroglio to rant over. Labels also enable those who control the labeling to manipulate the understanding of the labels. Perhaps you recall, several years ago, the “helpful” starburst signs on the menu board informing you that the super-sized drink was the “best value”? Tactics like this make it easy for us to substitute the comforting messaging of the labels for our own good decision making.

It’s because of this that I think the use of scales in the heatmap is so critical. Douglas Hubbard has outlined many of the reasons why an ordinal scale fails to communicate well, and I won’t recount them here, but ordinal scales do help facilitate some decision making under certain circumstances. The obligation of the heatmap-maker is to ensure that the appropriate interval or ratio scale is included along with the label. Something that says “Medium risk” and corresponds to “1 to 10 times a year” on the x-axis and “$100K to $1M” on the y-axis communicates the situation far better than just “Grande”… I mean “Venti”… I mean Medium (Shoot!). Of course, once it’s well known throughout your organization “how much” and “how often” we are talking about when discussing a risk scenario, then the label “Medium” helps to facilitate executive decision making and allows risk issues to be compared and contrasted.
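
To make that concrete, here is a minimal sketch in Python (the frequency and magnitude ranges for “Medium” come from the example above; the other rows and all names are hypothetical) of binding each label to its reference scales so the two always travel together:

```python
# A label means nothing without its reference scales, so store them together.
# Only the "Medium" ranges come from the example above; the rest are invented.

RISK_SCALE = {
    "Low":    {"times_per_year": (0.1, 1),  "loss_usd": (10_000, 100_000)},
    "Medium": {"times_per_year": (1, 10),   "loss_usd": (100_000, 1_000_000)},
    "High":   {"times_per_year": (10, 100), "loss_usd": (1_000_000, 10_000_000)},
}

def describe(label: str) -> str:
    """Render a label alongside the 'how often' and 'how much' it stands for."""
    freq_lo, freq_hi = RISK_SCALE[label]["times_per_year"]
    loss_lo, loss_hi = RISK_SCALE[label]["loss_usd"]
    return (f"{label} risk: {freq_lo} to {freq_hi} times a year, "
            f"${loss_lo:,} to ${loss_hi:,} per event")

print(describe("Medium"))
# Medium risk: 1 to 10 times a year, $100,000 to $1,000,000 per event
```

Once everyone can look up what “Medium” actually stands for, the label becomes shorthand rather than a guess.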

If you’ve ever been disappointed after ordering the “large” soda and being handed a cup that is clearly a “medium,” you will understand the importance of the reference scales. What you and the restaurant thought was clearly being communicated (the size of the drink cup) was based upon assumptions that the two parties did not share. Since there was no forum for communicating these expectations ahead of time, one party (you) was left dissatisfied with the purchase. Of course, the other lesson here is that if you change the scales (and do so frequently), it becomes difficult for the organization to follow along, because they still think “large” is the biggest cup you have. This will happen when you have staff moving between organizations (either intra- or inter-), where a person used to know what “severe” meant, but now that means Venti…I mean Medium (Shoot!). Adjusting to this change can only happen after the scales are communicated and used in reference to the labels. If new staff cannot adapt to the new use of these labels, it means they are managing risk from their own calculus and not the organization’s, and that can be a dangerous thing. Remember, the only risk tolerance that matters is that of the organization for which you are managing risk. Your own thoughts on the matter should be filtered through that reality.

Well, that was exhausting. I’m off to find some “extra” strength aspirin…or maybe just some “regular” strength…

Knuckle Busters

Where I live, we have been experiencing a lot of severe weather and, with it, power outages. It’s always fascinating for students of risk to watch how organizations behave in these scenarios. Especially interesting is how retail establishments deal with payment issues.

I entered an office supply store the other day to purchase some equipment I needed. It’s important to note that there were NO power outages that day. As I entered, I was told that they were unable to accept credit cards and could only take cash. Immediately, I asked why they didn’t just use the “knuckle busters” to imprint the cards. They said they couldn’t do authorization. I gestured as if turning over an imaginary credit card in my hand and told them to call the number on the back. They repeated their lines and added that it was “company-wide.” I realized I wasn’t going to get through to them (nor could they make such decisions at the store level anyway), so I left to go to another store to purchase my wares.

We live in a society that is increasingly going without cash. The latest IEEE Spectrum magazine spent an entire issue discussing the move to a cashless society. It’s for this reason that not being able to accept payment cards during an outage seems entirely unreasonable. In fact, given the proliferation of cards and the dearth of cash, one might not say “accept payment cards” and just say “accept payment.” Especially at the establishment I was in, where most transactions couldn’t have been completed with the change and few bills I had on me.

So why then do so many places not have backup payment using a knuckle buster?

It’s likely just lack of planning, but let’s assume a risk-based decision-making model. We own the store and probably have numbers on how many transactions we do on a given day and for how much. If we add some information about how often the power/server/etc. goes out, we can arrive at an average amount of money that we think we would lose by not accepting credit cards during an outage. But don’t stop there: we also have to work the threat side. Consider how many people we think are going to defraud us during this time. It would seem to me (but I’m willing to be wrong) that there are no “knuckle buster” fraud rings waiting for outages to swoop in and buy up lots of office supplies with fraudulent credit cards that can’t be authorized in real time.
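
As a back-of-the-envelope illustration of that arithmetic (every figure below is hypothetical, invented purely to show the shape of the analysis, not data from any real store):

```python
# Loss side: what do we forfeit by refusing cards during outages?
transactions_per_day = 400    # hypothetical card transactions on a normal day
avg_ticket_usd = 45           # hypothetical average transaction size
outage_days_per_year = 2      # hypothetical days/year authorization is down

lost_sales = transactions_per_day * avg_ticket_usd * outage_days_per_year
print(f"Expected card sales lost per year: ${lost_sales:,}")   # $36,000

# Threat side: how much fraud would accepting imprints actually invite?
assumed_fraud_rate = 0.005    # hypothetical share of imprinted sales gone bad
expected_fraud = lost_sales * assumed_fraud_rate
print(f"Expected fraud if we imprint instead: ${expected_fraud:,.0f}")  # $180
```

If numbers anywhere near these hold, turning customers away costs orders of magnitude more than the fraud the imprints would invite.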

A risk-based approach would establish some ceiling amount – say $250 – under which the company was willing to accept card imprints and move you on your way. Above that, they’d have someone call the number to verify funds (during my visit the store was full of bored employees pushing brooms who could have easily picked up a phone to call in the authorization for any amount, but I digress). I know that a well-known fast food restaurant you’ve likely eaten at established a threshold for payment card transactions under which they don’t worry about online authorization. During the lunch rush, they try for online auth at all times, but if they don’t get it within a time they set, they don’t worry about it. Take your #5 and move on; we’ll deal with it later. And if you defrauded them, they agreed to eat the cost (pun intended). This always seemed like a very reasonable approach to me, and it shows that they understand they stand to gain much more from a faster line than from strict adherence to online authorization.
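
Sketched as decision logic (the $250 ceiling is from the example above; everything else is an assumption for illustration), the policy is almost trivially simple:

```python
FLOOR_LIMIT_USD = 250  # ceiling below which we accept imprints without question

def handle_card_payment(amount_usd: float, online_auth_up: bool) -> str:
    """Decide how to take a card when online authorization may be unavailable."""
    if online_auth_up:
        return "authorize online as usual"
    if amount_usd <= FLOOR_LIMIT_USD:
        return "imprint the card and complete the sale"   # accept the fraud risk
    return "call the number on the back to verify funds"  # manual authorization

print(handle_card_payment(80, online_auth_up=False))
# imprint the card and complete the sale
```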

So, about those storms and power outages: I bought a whole-house generator about five years ago. That tells you a little about our risk tolerance, I guess :-)

Thus Wastes Man

A discussion on priority-making, risk, and the nature of humanity

I’m always interested in examples where we make implicit risk decisions. It happens naturally all the time, mostly because we lack the resources (time, skills) to properly evaluate the scenario. Despite being good at keeping us immediately out of harm’s way, this quick decision-making skill set (our “gut” reaction) is very often wrong about long-term risk. Nowhere is this more prevalent than in our own health decisions.

The FAIR risk-assessment framework discusses and flowcharts the reasons for failure to comply with policy; however, it is equally applicable to failures in decision making. At a high level, the flowchart goes like this: awareness, resources, motivation (evil, dumb, priorities). It’s usually the priorities that throw us for a loop: after I know what needs to be done and have the tools to do it, I still have to want to do it. Since we’re not often evil or dumb (thank goodness), I have to make it a higher priority than the other things I care about. It’s the same reason that although I see the nail pop in my wall all the time, I’m unlikely to ever really do anything about it (after all, I’m really busy with this blog and everything…).
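
Paraphrased as code (this is my own loose rendering of that flow, not the official FAIR flowchart), the diagnosis runs something like this:

```python
def why_noncompliant(aware: bool, resourced: bool, evil: bool,
                     dumb: bool, other_priorities: bool) -> str:
    """A loose paraphrase of the awareness -> resources -> motivation flow."""
    if not aware:
        return "awareness: they didn't know what needed to be done"
    if not resourced:
        return "resources: they lacked the time, tools, or skills"
    if evil:
        return "motivation: evil"
    if dumb:
        return "motivation: dumb"
    if other_priorities:
        return "motivation: priorities (they cared about something else more)"
    return "compliant: they knew, they could, and they wanted to"

# My nail pop, in these terms:
print(why_noncompliant(aware=True, resourced=True, evil=False,
                       dumb=False, other_priorities=True))
# motivation: priorities (they cared about something else more)
```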

It’s through these lenses (implicit decision making and the compliance flowchart) that I would like to discuss the following chart:

This is a chart provided by the FAIR Foundation on their website (no relation to the risk analysis method called FAIR). The chart details US funding priorities for various diseases (mostly – all? – NIH funding). I care about many of these diseases personally, as I’m sure many of you do. It’s because of this personal attachment (my gut reaction) that I’m immediately appalled at the funding priorities that exist. If we are being rational about our resource allocation, then clearly the diseases that cause the most deaths need the highest levels of funding. On closer evaluation, however, there is more to disease than just death; many diseases substantially limit one or more major life activities (to borrow a phrase from the US Americans with Disabilities Act of 1990). Diabetes (especially Type 1) robs you of normal eating habits for the rest of your life, Alzheimer’s takes your mental faculties, and Parkinson’s steals the ability to move normally (to name just a few – there are many horrible outcomes across these diseases).

So if we are all rational humans, then why are these funding priorities what they are?

There’s a certain amount of complexity associated with these decisions. There is a system of systems responsible for these funding decisions, not the least of which is popularity (there are countless discussions like this happening all over the web). However, the reality is that all rubrics for funding will leave some people’s concerns out of the running. There just aren’t enough resources to go around.

I don’t have the right answers for this problem, but I wanted to use this chart as a mirror for our own IT Risk and Security funding priorities. There are doubtless many pet projects that will garner the most funding in your organization without rational support from a risk perspective. Fighting this gut-level decision making is the work of IT Risk professionals today. Just as parts of the medical community argue for a risk-based approach to research funding, you too should be spending your time and effort advocating for the reduction of risk in the scenarios that affect your organizations.

Given that you will never work for an organization that has an infinite budget for security (or anything, really), nor will you ever have all the time needed to address every concern, you must prioritize efforts to ensure the best results. Priority-making is inherently a risk-based activity. This is the essence of modern risk management.

A drink after work

Your organization has a problem with its employees. Too many people are going to Happy Hour after work and spilling important information about future expansion plans and other details about top-secret intellectual property. This lack of operational security (OpSec) is starting to take a toll on the business. The company is losing out on new opportunities, competitors are undercutting its bids, and next year’s new model is already being touted by a competitor. What’s worse, your HR department is telling you that the next generation of employees grew up with Happy Hour and have very different thoughts about how it should be used. Their basic attitude is that all the “old folk” in the company need to get with it and start using Happy Hour. They don’t bother drawing a distinction between personal and work drinking, and they don’t care about this OpSec problem. They’re completely aggro about it and are demanding that the company stop trying to keep them from Happy Hour.

After a long debate in which many options were evaluated, the company finally has its solution to the problem: the Company Bar. Yes, the plan is to renovate an office downstairs and install their own working alehouse, taproom, cocktail lounge, watering hole, and any other synonym for a place where beer, wine, and spirits are sold. This plan is genius! Now, instead of everyone leaving work to go to Happy Hour, everyone will just go downstairs after work and drink in an environment where no one has to worry about saying the wrong thing to the local purveyor of corporate espionage. Since they can act out their Happy Hour needs with corporate blessing, no one will feel the need to go to other establishments to wet their whistles.

————————-

These are my thoughts on the effectiveness of corporate social media sites as a control to limit information leakage (I’m looking at you, Yammer).

Private Sector Perspectives on Cyberwar

I sat through a presentation recently about cyberwar. It’s a topic that engenders a lot of passion in the information security community. There seems to be a natural line drawn between those with previous experience in the military and government and those with primarily private sector experience. The typical military/government professional will attempt to elicit a response from those in private industry. Largely, those in private industry yawn (I’m excluding government contractors). And I think this is largely the right response.

Generally speaking, I want those in government to care a lot about war, and I want private industry to focus on creating value for investors, customers, and other stakeholders. A lot of cyberwar discussions talk about “kinetics,” or whether there is physical destruction. In large part, most private sector companies will not be able to withstand any sufficiently effective physical attack. This is due to these organizations subscribing (implicitly or explicitly) to the theory of subsidiarity, which states in part that whatever can be accomplished at the lowest and smallest level of endeavor should be conducted there. Clearly, conducting and participating in war (cyber or otherwise, kinetic or not) is not the domain of the private sector. After all, military action is what our taxes fund (amongst other things). There is a history of the private sector being targeted by military action; taking out communications or other means of production and/or agriculture is a time-tested technique to bring your opponent to their knees. We don’t typically see this kind of technique in modern warfare, but it’s common to apply pressure to the citizenry in order to force the hands of political leaders to yield to their enemy’s demands. In my opinion, this is the form in which we will see cyberwar – attacks against the private sector in order to force the hands of politicians.

So, back at the presentation, the speaker responded to the seemingly innocuous question of whether or not we could win the cyberwar. He answered this question with a question: have we ever won a war? Well yes, of course we have. I quickly rattled off a few to the colleagues sitting at the table with me: WWII, WWI (although not well enough to avoid WWII), the Civil War, heck, the Revolutionary War, etc., etc. If the question was meant, or interpreted, to mean “will we ever be rid of cyberwar,” then clearly the answer is no; but yes, we can of course win the wars and skirmishes that may arise in our future. There will always be a threat on the horizon that demands vigilance at some level.

So how do you prepare for these kinds of skirmishes? Well, it depends on the threats you are defending against. Sophisticated nation states likely represent the 90th, 95th, or even 99th percentile of attackers. To be clear, most organizations could spend themselves into oblivion defending against these kinds of attackers. Yet the same organizations are likely not doing an effective job of defending against even the 50th percentile of attackers. As with all risk assessments, context matters, and nowhere more so than with cyberwar. Your organization’s insurance policies probably don’t even cover acts of war, so if you think that cyberwar is a concern for your organization, then you have more exposure in other places as well. Security is often surprisingly boring, and here is a great example: to defend against that 90th percentile attacker, you probably have to start by doing a good job defending against the lower-tiered attackers. Focus on patching, currency, and user access. It’s boring, but it has good payoffs. Attend the conference and enjoy the cyberwar talks, but don’t forget the basics.

Pizza Sauce and Security

We held a yard sale last week. If you’ve ever done this, then you know the turmoil over pricing. Your stuff is valuable to you, but there is often a hard reality that hits when you try to extract that value from the public. Put simply, your stuff typically isn’t worth what you think.

Pricing your security services involves a similar reality check. Many organizations mandate a security review as part of their SDLC (and if they don’t, they should). Paying for this is an interesting conundrum. Once upon a time, I developed a metaphor that I thought was useful for getting to the root of the pricing problem. I called it “Pizza Sauce.” At the time, we were trying to develop a way to price the value that we thought security could add to software development projects. The problem we ran into quickly was that people thought security was already part of the price (at the time, we were selling to third parties, not internal organizations, but the metaphor works either way). I equated it to a pizza: if you order a pizza, you assume it comes with sauce. You’d be insulted if you received a bill for the pizza with a line item for sauce. Similarly, there is a negative perception associated with adding a line item for security (if I don’t pay extra, you’ll make it insecure?). So let’s assume that you created a really amazing, brand-new sauce. You can’t charge extra for the sauce, but you can include pricing that reflects that value in the overall price of the pie.

Security needs to be priced similarly – namely, since people already assume security is baked in, you need to include that pricing in the overall cost in a way that doesn’t encourage people to skip it to reduce costs. For many organizations this can mean listing security personnel on project plans at a zero-dollar bill rate, or including security in the overhead charged to cost centers for general IT services.

The key takeaway is to price security to extract value, but not so high as to encourage circumvention.

How to Play


I recently took my daughter to a kid’s birthday party. The venue had one of those kids’ gyms where you kick your shoes off, dive into the balls, and have a great time. Risk never leaves my mind, so when I reviewed the sign posted over the entrance to the area, I found an interesting parallel that I thought I’d share.

There was a sign posted that said, “How To Play,” followed by what is presumably a list of rules on how to play. The gate was guarded by a disinterested young man sketching on a pad and ostensibly enforcing the rules of play. What were those rules? See for yourself:

  1. No shoes or coats
  2. No running or jumping
  3. No throwing balls

What is missing from this list is exactly what the title of the sign said would be there: rules for playing. Instead, what we have is a list of how NOT to play. While my little one was playing, she was having a difficult time getting up some of the ramps in her stockinged feet, so I slipped her socks off and sent her on her way. My wife chastised me because another sign, somewhat out of sight, indicated that socks were required. The disinterested young man from earlier failed to notice.

I think there are some clear parallels to corporate security policies in this brief example. First, information security policies rarely identify “How to Play.” Instead, like the sign example above, we frequently find a list of things you are not allowed to do. This is an example of security-centric thinking. Know this: the people in your company are interested in knowing How To Play. Tell them the approved technologies, processes, and systems that they are allowed to use without running afoul of policy. This is the basic logic of a whitelist versus a blacklist, so help your organization know how to do the right thing (I’m assuming there’s more you don’t want them doing than otherwise, so save time and just tell them what to do).
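
The difference is easy to see in code. Here is a minimal sketch of the whitelist stance (the tool names are hypothetical): name what IS approved and deny everything else by default, instead of maintaining an ever-growing list of “don’t”s:

```python
# Whitelist logic: the policy names approved tools; anything else is denied.
APPROVED_FILE_SHARING = {"corporate-onedrive", "approved-sftp-gateway"}  # hypothetical entries

def may_use(tool: str) -> bool:
    """Default-deny: only explicitly approved tools pass."""
    return tool in APPROVED_FILE_SHARING

print(may_use("corporate-onedrive"))  # True
print(may_use("random-upload-site"))  # False, and so is anything unlisted
```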

Next, I’m sure the metaphor of the disinterested enforcement agent is not lost on most. Enforcement is a tricky business, worthy of longer treatment, but for today’s blog post focus on the economics of the situation. There was one guy at the entrance who ostensibly had responsibility for enforcing the rules across the entire area (it was very large, with between 30 and 50 kids). Clearly he was going to fail at 100% enforcement. But just like in other areas of life, it’s often just as effective to enforce selectively in the areas that are high-risk.

Lastly, don’t forget the allure of the one-stop-shop. Having everything you need someone to know in one place is valuable. Don’t make them hunt for that hidden sign to find out that bare feet are not allowed. Everything should be clearly visible and in one place.

In summary, we as security practitioners can make it easy or hard for people to comply. You get to decide, “How To Play” for your organizations. Choose wisely.