Private Sector Cyberwar – part 2

I wrote about this last May, arguing that so-called cyberwar events are not the private sector's domain to defend against. My argument was that cyberwar perpetrators sit in the upper percentiles of attackers (95th and above), and that short of building our organizations' control strength up to that level, we should simply leave cyberwar to governments.

With that as a backdrop, I was fascinated by this article, which reports that this exact thing I outlined was actually happening. The bank BB&T has sought help from the government in warding off DDoS attacks believed to be from state-sponsored attackers. This line in particular seemed to reflect my posture on these types of attacks:

“BB&T…and others now say they have spent millions in warding off the attacks and can’t be expected to fend off such attacks from another government.”

If I read between the lines, they have spent a lot of money to raise their control strength; however, against attackers in the 95th percentile, it's simply outside their responsibility to defend. As in warfare of earlier times, it's time for the generals to step up and keep the farms of the countryside from being destroyed, lest they be unable to feed their armies in times of need.

How Security, Audit, and Risk should work together

My article on the role of audit and risk was published in the October 2012 issue of the ISSA Journal. If you didn't catch it then, you can find it here.

I began this article with a question: when did IT auditing become a profession? With that in mind, I went back to the original version of COBIT to find answers. This led me down a familiar path: basically, I really don't want audit doing risk. Auditors will always feel compelled to assign a level of priority, which I would argue is always a statement of risk, but risk ranking should be left to the groups that are expert at it.

Be the person on the phone

So I purchased some of those curlicue light bulbs (CFLs), but as I am prone to do, I got the wrong ones (the base wasn't right). Also as I always do, I bought the giant big-box-store pack, so it made sense to return them. So my family and I roll up to the <big box warehouse store> and I head for the customer service desk. I exchange pleasantries with the Lady Behind the Counter and inform her of my desire to return these bulbs for a refund. She takes the package, looks it over, and asks where the Sticker is. It's at this moment, were this an '80s high school movie, that some DJ somewhere would cause the record to scratch. For you see, I had no such Sticker on my packaging. I informed her of this, and she was exasperated. The greeter at the door was supposed to interrupt my ingress, inquire about the returned merchandise in my hand, tag it with the Sticker, and direct me to the customer service desk. That not having happened, there was no way they could possibly know that I hadn't taken this off the shelf and walked directly to the desk to perpetrate some fraud.

“We’ll have to check the videotape,” she said.

At this point, I too was exasperated. I attempted to explain that I purchased this and showed her my receipt. She waved over the greeter who was unable to recognize me from the myriad throngs of people that had been so “greeted.” The Lady Behind the Counter began making calls up the ranks. My wife asks if I would like her and my daughter to wait.

“Oh yes,” I say, “having my family nearby makes me look less like a criminal.”

I hear the half conversation over the phone where the Lady Behind the Counter says, “Uh, $16. Oh, okay,” and then hangs up. “We’ll accept it this time, but next time…”

I’ve turned this exchange over in my head countless times since. How could they have authenticated me better? What sort of losses from this threat vector have they incurred that caused them to implement this program? I never had to get a sticker on my returns from the <big box warehouse store> back in Central Ohio…

I've used this story several times since as an illustration of the distinction between auditors and risk professionals. It is absolutely critical that somebody be in charge of checking tickets. You need a ticket to get into the show, or in my case a Sticker. The policy says you need a Sticker, so a Sticker is what's required. It's also critical that the person at the door check incoming merchandise and apply a Sticker. The former is the auditor; the latter is more akin to IT operations. But what of the person on the phone? Ah! They were the risk manager, you see. They understood that a policy violation had occurred despite my having a valid receipt and a relatively honest-looking face. They could have checked my purchase history to see that I spend A LOT of money at their establishment. Sure, they had video of the incident, but for $16, everyone had better things to do. That is a risk-based decision. That's just being a human being in a room otherwise full of automatons and making a judgment call that there are better things to spend our finite resources on than less than $20 worth of light bulbs that I likely really did purchase.

This is why the notion of something called a "risk-based audit" is somewhat anathema to me. Sure, please do check controls in areas where there is risk to the business, but that quickly gives rise to the causality dilemma commonly known as the chicken-or-egg scenario: if the audit is meant to reveal high-risk areas, then how could we possibly use risk as an input to scoping the audit (which is the premise of the risk-based audit)?

To bring this back home, let me say that I absolutely want and need somebody issuing and checking tickets at the door. But I'd never mistake them for risk managers. And if you wish to progress in your career as an IT risk professional, try being the person on the other end of that phone call, and stop sweating the small stuff, because somebody's probably trying to run off with a new TV while you're squabbling over light bulbs.

Security is an Empty Gun

There is a point where a security exception ceases to be an exception and becomes the rule. It's at times like these that the information security department can swagger in and lay down the law. Put simply, security makes the rest of the business bend to its will, and if push comes to shove, security can pull out its piece and compel the action it desires…or else!

Except it's the "or else" that's really the problem. Like a modern-day Barney Fife, Information Security has no bullets in its gun (we may have some in our breast pocket, though, only to be used for emergencies).

This gun metaphor is very helpful for understanding two things about the practice of information security today. First (and obviously), there are the overtly violent overtones associated with the imagery above. If we reflect on the perception of security over the past several decades, it's clear that it is viewed as an aggressor. It's a perception that is well earned: keeping things and people safe is by necessity an aggressive career choice, only to be undertaken by those steeped in machismo. Except in the corporate world, this approach is misplaced. It's reminiscent of the over-enthusiastic mall cop, or the former New York City police officer who is now a corporate physical security guard. And this metaphor, too, is an important lesson in how information security could be perceived if misapplied (which reminds me of this scene from GoldenEye).

Adapting to the new reality of risk-based security means relinquishing the controls-based security approach that is endemic to the mall-cop metaphor above. Which brings me to my second point: if we pull the trigger of that gun, the infecundity of controls-based information security is made plain for all to see. Simply saying "no" to new technology is not an act of machismo anymore; it's an act of suicide. It is oftentimes denying the business the very thing it needs to survive. Whether it be cloud, mobile, social media, or BYOD, the modern IT landscape is rife with opportunities for information security to enable top-line growth, or at the very least to protect the bottom line. Like the stick-up artist who engenders such fear with his pistol, information security has the ability to effect change, just so long as it doesn't actually shoot anyone. Hinting at the regulatory hammer(s) to which you are subject is the bullet; it's just not in your gun. Instead, partner with the business to protect against the bullet in the FTC's or PCI Council's gun, lest you drop the hammer on yours and the business hears the emasculating click of an empty chamber.

In order to achieve success in modern information security programs, there must be an emphasis on the soft skills of negotiation and communication. Effectively communicating a risk scenario using a mature risk taxonomy (one that allows you to communicate threats, control deficiencies, vulnerability, and losses) gives risk decision makers the ability to make a well-informed decision. And that, after all, is what information security is really all about: enabling decision makers with the information they need to determine if a risk is worth taking.

And now, "Looking Down the Barrel of a Gun" by the Beastie Boys. Apropos.


I’d like the medium please

I was thinking about risk heatmaps the other day and how organizations use different labels. Some stick with the tried and true: High, Medium, and Low. Oftentimes an interesting label is added: severe, important, serious, OMG, Armageddon, and then the highest, PCI. Intrinsically, these labels do little to communicate relative risk. Research has indicated that not only do different people perceive the same label differently, but this holds true even when they are told the underlying scale to which the label corresponds. What struck me the other night, however, was something less academic: how similar this is to drink cup sizes.

I won't do it justice here, but countless people have ranted about how there's no such thing as a medium anymore and how everything starts at a large and goes up from there. I can't tell you how many times I've been "corrected" while ordering a medium that it's now called the large, and so on. Seriously, I just want the middle option; I don't care what it's called. Starbucks has its own labels that eschew the common parlance, which has spawned its own backlash (I still won't say "Tall" even if my life depended on it). Nevertheless, why do these labels even matter?

Well, to start, it expedites the ordering process. If we had to sit and wonder whether we wanted 32 or 18 ounces, we'd have a whole other imbroglio to rant over. Labels also enable those who control the labeling to manipulate our understanding of them. Perhaps you recall, several years ago, the "helpful" starburst signs on the menu board informing you that the super-sized drink was the "best value"? Tactics like this make it easy for us to substitute the comforting messaging of the labels for our own good decision making.

It’s because of this that I think the use of scales in the heatmap is so critical. Douglas Hubbard has outlined many of the reasons why an ordinal scale fails to communicate well, and I won’t recount them here, but ordinal scales do help facilitate some decision making under certain circumstances. The obligation of the heatmap-maker is to ensure that the appropriate interval or ratio scale is included along with the label. Something that says “Medium risk” and corresponds to “1 to 10 times a year” on the x-axis and “$100K to $1M” on the y-axis communicates the situation far better than just “Grande” I mean “Venti”…I mean Medium (Shoot!). Of course, once it’s well known throughout your organization “how much” and “how often” we are talking about when discussing a risk scenario, then the label “Medium” helps to facilitate executive decision making and allows risk issues to be compared and contrasted.
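To make this concrete, here is a minimal sketch of a label that carries its scales with it. The band boundaries below are purely illustrative assumptions for this example, not a standard; every organization must calibrate its own.

```python
# Illustrative only: these band boundaries are assumptions, not a standard.
FREQUENCY_BANDS = [   # events per year (x-axis)
    ("Low", 0, 1),
    ("Medium", 1, 10),
    ("High", 10, 100),
]
MAGNITUDE_BANDS = [   # loss per event in dollars (y-axis)
    ("Low", 0, 100_000),
    ("Medium", 100_000, 1_000_000),
    ("High", 1_000_000, 10_000_000),
]

def band(value, bands):
    """Return the label whose [low, high) range contains value."""
    for label, low, high in bands:
        if low <= value < high:
            return label
    return bands[-1][0]  # clamp anything above the top band

def heatmap_label(events_per_year, loss_per_event):
    """Combine the two ordinal labels and report the scales alongside them."""
    f = band(events_per_year, FREQUENCY_BANDS)
    m = band(loss_per_event, MAGNITUDE_BANDS)
    return (f"{f} frequency ({events_per_year}/yr) x "
            f"{m} magnitude (${loss_per_event:,})")

print(heatmap_label(5, 250_000))
# A "Medium/Medium" cell now carries its ranges with it: the reader knows
# it means 1-10 times a year at $100K-$1M per event.
```

The point of the design is that the quantitative range always travels with the ordinal label, so "Medium" can never drift into meaning whatever the reader assumes.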

If you've ever been disappointed after ordering the "large" soda and being handed a cup that is clearly a "medium," you will understand the importance of reference scales. What you and the restaurant thought was clearly being communicated (the size of the drink cup) was based upon assumptions the two parties did not share. Since there was no forum for communicating these expectations ahead of time, one party (you) was left dissatisfied with the purchase. Of course, the other lesson here is that if you change the scales (and do so frequently), it becomes difficult for the organization to follow along, because they still think "large" is the biggest cup you have. This will happen when you have staff moving between organizations (either intra- or inter-) where one person used to know what "severe" meant, but now that means Venti…I mean Medium (Shoot!). Adjusting to this change can only happen after the scales are communicated and used in reference to the labels. If new staff cannot adapt to the new use of these labels, it means they are managing risk by their own calculus and not the organization's, and that can be a dangerous thing. Remember, the only risk tolerance that matters is that of the organization for which you are managing risk. Your own thoughts on the matter should be filtered through that reality.

Well, that was exhausting. I’m off to find some “extra” strength aspirin…or maybe just some “regular” strength…

Knuckle Busters

Where I live, we have been experiencing a lot of severe weather and, with it, power outages. It's always fascinating for students of risk to watch how organizations behave in these scenarios. Especially interesting is how retail establishments deal with payment issues.

I entered an office supply store the other day to purchase some equipment I needed. It's important to note that there were NO power outages that day. As I entered, I was told that they were unable to accept credit cards and could only take cash. Immediately, I asked why they didn't just use the "knuckle busters" to imprint the cards. They said they couldn't do authorization. I gestured as if turning over an imaginary credit card in my hand and told them to call the number on the back. They repeated their lines and added that it was "company-wide." I realized I wasn't going to get through to them (nor could they make such decisions at the store level anyway), so I left to purchase my wares at another store.

We live in a society that is increasingly going without cash. The latest IEEE Spectrum magazine spent an entire issue discussing the move to a cashless society. It's for this reason that not being able to accept payment cards during an outage seems entirely unreasonable. In fact, given the proliferation of cards and the dearth of cash, one might not say "accept payment cards" at all and just say "accept payment." Especially at the establishment I was at, where most transactions couldn't have been completed with the change and few bills I had on me.

So why then do so many places not have backup payment using a knuckle buster?

It's likely just a lack of planning, but let's assume a risk-based decision-making model. We own the store and probably have numbers on how many transactions we do on a given day and for how much. If we add some information about how often the power/server/etc. goes out, we can arrive at an average amount of money that we think we would lose by not accepting credit cards during an outage. But don't stop there: we also have to work the threat side as well. Consider how many people we think are going to defraud us during this time. It would seem to me (but I'm willing to be wrong) that there are no "knuckle buster" fraud rings waiting for outages to swoop in and buy up lots of office supplies with fraudulent credit cards that can't be authorized in real time.

A risk-based approach would establish some ceiling amount, say $250, that the company was willing to accept card imprints for, and move you on your way. Above that, they'd have someone call the number to verify funds (during my visit the store was full of bored employees pushing brooms who could have easily picked up a phone to call in the authorization for any amount, but I digress). I know of a well-known fast food restaurant that you've likely eaten at that established a threshold for payment card transactions under which they don't worry about online authorization. During the lunch rush, they try for online auth at all times, but if they don't get it within a time limit they set, they don't worry about it. Take your #5 and move on; we'll deal with it later. And if you defrauded them, they agreed to eat the cost (pun intended). This always seemed like a very reasonable approach to me, and it shows they understand that they stand to gain much more from a faster line than from strict adherence to online authorization.
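To sketch the arithmetic, the comparison might look like the following. Every number here is a made-up assumption for illustration, not data from any real store.

```python
# Hypothetical figures: all inputs below are assumptions for illustration
# only, not data from any actual retailer.
card_sales_per_hour = 1_200.0   # dollars normally rung up on cards
outage_hours_per_year = 10.0    # expected hours/year the auth link is down
margin = 0.30                   # gross margin on those sales

# Option 1: refuse cards during outages and forfeit the sales.
refused_sales = card_sales_per_hour * outage_hours_per_year
loss_refusing = refused_sales * margin  # margin that walks out the door

# Option 2: take imprints up to a ceiling and eat any fraud.
fraud_rate = 0.01               # assumed share of imprinted dollars charged back
loss_imprinting = refused_sales * fraud_rate

print(f"Refusing cards: ~${loss_refusing:,.0f}/yr in lost margin")
print(f"Imprinting with a ceiling: ~${loss_imprinting:,.0f}/yr in fraud")
```

Under these made-up numbers, eating the occasional fraudulent imprint is more than an order of magnitude cheaper than turning customers away, which is exactly the trade-off that fast food chain made.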

So, about those storms and power outages: I bought a whole-house generator about five years ago. That tells you a little about our risk tolerance, I guess :-)

Thus Wastes Man

A discussion on priority-making, risk, and the nature of humanity

I'm always interested in examples where we make implicit risk decisions. It happens naturally all the time, mostly because we lack the resources (time, skills) to properly evaluate the scenario. Despite being good at keeping us immediately out of harm's way, this quick decision-making skill set (our "gut" reaction) is very often wrong about long-term risk. Nowhere is this more prevalent than in our own health decisions.

The FAIR risk-assessment framework discusses and flowcharts the reasons for failure to comply with policy; however, it is equally applicable to failures in decision making. At a high level, the flowchart goes like this: awareness, resources, motivation (evil, dumb, priorities). It's usually the priorities that throw us for a loop: after I know what needs to be done and have the tools to do it, I still have to want to do it. Since we're not often evil or dumb (thank goodness), it comes down to whether I make it a higher priority than the other things I care about. It's the same reason that although I see the nail pop in my one wall all the time, I'm unlikely to ever really do anything about it (after all, I'm really busy with this blog and everything…).
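The flow above can be sketched as a simple walk through the chart. To be clear, the function and its field names are my own illustration of the high-level flow, not FAIR's formal taxonomy.

```python
def noncompliance_cause(aware: bool, has_resources: bool,
                        task_rank: int, top_competing_rank: int) -> str:
    """Walk the awareness -> resources -> motivation chart in order
    and report the first step that fails. Ranks: lower = higher priority."""
    if not aware:
        return "awareness: they didn't know what needed to be done"
    if not has_resources:
        return "resources: they lacked the time, tools, or skills"
    if task_rank > top_competing_rank:
        return "priorities: other concerns simply ranked higher"
    return "motivated: no excuse left (barring evil or dumb)"

# The nail-pop example: I know about it and I own a hammer,
# but this blog ranks higher on my list.
print(noncompliance_cause(True, True, task_rank=5, top_competing_rank=1))
```

The ordering matters: you only get to argue about priorities once awareness and resources are ruled out, which is why so many compliance failures land on that last branch.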

It’s through these lenses (implicit decision making and the compliance flowchart) that I would like to discuss the following chart:

This is a chart provided by the FAIR Foundation on their website (no relation to the risk analysis method called FAIR). The chart details US funding priorities for various diseases (mostly, if not all, NIH funding). I care about many of these diseases personally, as I'm sure many of you do. It's because of this personal attachment (my gut reaction) that I'm immediately appalled at the funding priorities that exist. If we are being rational about our resource allocation, then clearly the diseases that cause the most deaths need the highest levels of funding. On closer evaluation, however, there is more to disease than just death; many diseases substantially limit one or more major life activities (to borrow a phrase from the US Americans with Disabilities Act of 1990). Diabetes (especially Type 1) robs you of normal eating habits for the rest of your life, Alzheimer's takes your mental faculties, and Parkinson's steals the ability to move normally (to name just a few; there are many horrible outcomes for many of these diseases).

So if we are all rational humans, then why are these funding priorities what they are?

There’s a certain amount of complexity associated with these decisions. There is a system of systems responsible for these funding decisions, not the least of which is popularity (there are countless discussions like this happening all over the web). However, the reality is that all rubrics for funding will leave some people’s concerns out of the running. There just aren’t enough resources to go around.

I don't have the right answers to this problem, but I wanted to use this chart as a mirror for our own IT risk and security funding priorities. There are doubtless many pet projects that will garner the most funding in your organization without rational support from a risk perspective. Fighting this gut-level decision making is the work of IT risk professionals today. Just as parts of the medical community argue for a risk-based approach to research funding, you too should be spending your time and effort advocating for the reduction of risk in the scenarios that affect your organization.

Given that you will never work for an organization that has an infinite budget for security (or anything, really), nor will you have all the time needed to address every concern, you must prioritize efforts to ensure the best results. Priority-making is inherently a risk-based activity. This is the essence of modern risk management.

A drink after work

Your organization has a problem with its employees. Too many people are going to Happy Hour after work and spilling important information about future expansion plans and other details about top-secret intellectual property. This lack of operational security (OpSec) is starting to take a toll on the business. The company is losing out on new opportunities, competitors are undercutting its bids, and next year's new model is already being touted by a competitor. What's worse, your HR department is telling you that the next generation of employees grew up with Happy Hour and has very different thoughts about how it should be used. Their basic attitude is that all the "old folk" in the company need to get with it and start using Happy Hour. They don't bother drawing a distinction between personal and work drinking, and they don't care about this OpSec problem. They're completely aggro about it and are demanding that the company stop trying to keep them from Happy Hour.

After a long debate in which many options were evaluated, the company finally has its solution to the problem: the Company Bar. Yes, the plan is to renovate an office downstairs and install their own working alehouse, taproom, cocktail lounge, watering hole, and any other synonym for a place where beer, wine, and spirits are sold. This plan is genius! Now, instead of everyone leaving work to go to Happy Hour, everyone will just go downstairs after work and drink in an environment where no one has to worry about saying the wrong thing to the local purveyor of corporate espionage. Since they can act out their Happy Hour needs with corporate blessing, no one will feel the need to go to other establishments to wet their whistles.


These are my thoughts on the effectiveness of corporate social media sites as a control to limit information leakage (I'm looking at you, Yammer).

Private Sector Perspectives on Cyberwar

I sat through a presentation recently about cyberwar. It's a topic that engenders a lot of passion in the information security community. There seems to be a natural line drawn between those with previous experience in the military and government and those with primarily private sector experience. The typical military/government professional will attempt to elicit a response from those in private industry. Largely, those in private industry yawn (I'm excluding government contractors). And I think this is largely the right response.

Generally speaking, I want those in government to care a lot about war, and I want private industry to focus on creating value for investors, customers, and other stakeholders. A lot of cyberwar discussions talk about "kinetics," or whether there is physical destruction. In large part, most private sector companies will not be able to withstand any sufficiently effective physical attack. This is due to these organizations subscribing (implicitly or explicitly) to the theory of subsidiarity, which states in part that whatever can be accomplished at the lowest and smallest level of endeavor should be conducted there. Clearly, conducting and participating in war (cyber or otherwise, kinetic or not) is not the domain of the private sector. After all, military action is what our taxes fund (among other things). There is a history of the private sector being targeted by military action; taking out communications or other means of production and/or agriculture is a time-tested technique to bring your opponent to their knees. We don't typically see this technique in modern warfare, but it is common to apply pressure to the citizenry in order to force the hands of political leaders to yield to their enemy's demands. In my opinion, this is the form in which we will see cyberwar: attacks against the private sector in order to force the hands of politicians.

So, back at the presentation, the speaker responded to the seemingly innocuous question of whether or not we could win the cyberwar. He answered this question with a question: have we ever won a war? Well yes, of course we have. I quickly rattled off a few to the colleagues sitting at the table with me: WWII, WWI (although not well enough to avoid WWII), the Civil War, heck, the Revolutionary War, etc. If the question was meant, or interpreted to mean, "will we ever not have cyberwar," then clearly the answer is no; but yes, we can of course win the wars and skirmishes that may arise in our future. However, there will always be an ever-present threat on the horizon that demands vigilance at some level.

So how do you prepare for these kinds of skirmishes? Well, it depends on the threats you are defending against. Sophisticated nation-states will likely represent the 90th, 95th, or even 99th percentile of attackers. To be clear, most organizations can spend themselves into oblivion defending against these kinds of attackers. However, the same organizations are likely not doing an effective job of defending against attackers at even the 50th percentile. Like all risk assessments, context matters, and nowhere more so than in cyberwar. Your organization's insurance policies probably don't even cover acts of war, so if you think cyberwar is a concern for your organization, you likely have greater exposures elsewhere. Security is often surprisingly boring, and here is a great example: to defend against that 90th-percentile attacker, you probably have to start by doing a good job defending against the lower-tiered attackers. Focus on patching, currency, and user access. It's boring, but it has good payoffs. Attend the conference and enjoy the cyberwar talks, but don't forget the basics.

Pizza Sauce and Security

We conducted a yard sale last week. If you've ever done this, then you know the turmoil over pricing. Your stuff is valuable to you, but a hard reality often hits when you try to extract that value from the public. Put simply, your stuff typically isn't worth what you think.

Pricing your security services reflects a similar statement of risk. Many organizations mandate a security review as part of their SDLC (and if they don't, they should). Paying for it is an interesting conundrum. Once upon a time, I developed a metaphor that I thought was useful for getting to the root of the pricing problem. I called it "Pizza Sauce." At the time, we were trying to develop a way to price the value that we thought security could add to software development projects. The problem we quickly ran into was that people thought security was already part of the price (at the time, we were selling to third parties, not internal organizations, but the metaphor works either way). I equated it to a pizza: if you order a pizza, you assume it comes with sauce. You'd be insulted if you received a bill for the pizza with a line item for sauce. Similarly, there is a negative perception associated with adding a line item for security ("If I don't pay extra, you'll make it insecure?"). So let's assume that you created a really amazing, brand-new sauce. You can't charge extra for the sauce, but you can include pricing to reflect that value in the overall price of the pie.

Security needs to be priced similarly: since people already assume security is baked in, you need to include that pricing in the overall cost in a way that doesn't encourage people to skip it to reduce costs. For many organizations this can mean listing security personnel on project plans at a zero-dollar bill rate, or including security in the overhead charged to cost centers for general IT services.

The key takeaway is to price security so that you extract its value, but not so high as to encourage circumvention.