Smart Contracts

I was interviewed for, and quoted in, this ISACA publication on Smart Contracts.

Upon reflection, what we are really seeing is just a continuation of the concept of Code = Law as pointed out by Lawrence Lessig in his 1999 book, Code and Other Laws of Cyberspace.

The Smart Contracts doc is a free download (after registration) and can be found here:


Risk and Regulation

My latest @ISACA article was published today. In it, I focus on the notion of where our authority comes from in Information Security. Too often, in my opinion, we rely on regulation as a source of “why” when articulating control requirements. I think this is dangerous and counter to the very nature of what an effective risk practitioner is.

Take a read and let me know your thoughts!


Be the person on the phone

So I purchased some of those curlicue light bulbs (CFLs), but as I am prone to do, I got the wrong ones (the base wasn’t right). Also, as I always do, I bought the giant big box store pack, so it made sense for me to return them. So my family and I roll up to the <big box warehouse store> and I head for the customer service desk. I exchange pleasantries with the Lady Behind the Counter and inform her of my desire to return these bulbs for a refund. She takes the package, looks it over, and asks where the Sticker is. It’s at this moment, were this an 80s high school movie, that some DJ somewhere would cause the record to scratch. For you see, I had no such Sticker on my packaging. I so informed her, and she was exasperated. The greeter at the door was supposed to interrupt my ingress, inquire about the returned merchandise in my hand, tag it with the Sticker, and direct me to the customer service desk. Since that hadn’t happened, there was no way they could possibly know that I hadn’t taken this off the shelf and walked directly to the desk to perpetrate some fraud.

“We’ll have to check the videotape,” she said.

At this point, I too was exasperated. I attempted to explain that I had purchased this and showed her my receipt. She waved over the greeter, who was unable to recognize me from the myriad throngs of people that had been so “greeted.” The Lady Behind the Counter began making calls up the ranks. My wife asked if I would like her and my daughter to wait.

“Oh yes,” I said, “having my family nearby makes me look less like a criminal.”

I heard half of the conversation over the phone, where the Lady Behind the Counter said, “Uh, $16. Oh, okay,” and then hung up. “We’ll accept it this time, but next time…”

I’ve turned this exchange over in my head countless times since. How could they have authenticated me better? What sort of losses from this threat vector have they incurred that caused them to implement this program? I never had to get a sticker on my returns from the <big box warehouse store> back in Central Ohio…

I’ve used this story several times since as an illustration of the distinction between auditors and risk professionals. It is absolutely critical that somebody be in charge of checking tickets. You need a ticket to get into the show, or in my case a Sticker. The policy says you need a Sticker, so a Sticker is what’s required. It’s also critical that the person at the door check incoming merchandise and apply a Sticker. The former is the auditor; the latter is more akin to IT operations. But what of the person on the phone? Ah! They were the risk manager, you see. They understood that a policy violation occurred despite my having a valid receipt and a relatively honest-looking face. They could have checked my purchase history to see that I spend A LOT of money at their establishment. Sure, they had video of the incident, but for $16, everyone had better things to do. That is a risk-based decision. That’s just being a human being in a room otherwise full of automatons and making a judgment call that there are better things to spend our finite resources on than less than $20 worth of light bulbs that I likely really did purchase.
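If you want to see the math behind that phone call, here is a crude back-of-the-envelope sketch; every number except the $16 refund is invented for illustration:

```python
# A back-of-the-envelope version of the phone call.
# Every figure except the $16 refund is hypothetical.

refund_at_stake = 16.00          # the disputed light bulbs
p_fraud = 0.05                   # guess: chance this particular return is fraudulent
expected_loss_if_accepted = p_fraud * refund_at_stake          # $0.80

minutes_to_review_video = 20
loaded_labor_rate_per_hour = 30.00
cost_to_investigate = (minutes_to_review_video / 60) * loaded_labor_rate_per_hour  # $10.00

# Spending $10 of staff time to avoid an expected $0.80 loss (and annoying a
# regular customer in the process) is a poor trade. Accept the return.
print(cost_to_investigate > expected_loss_if_accepted)  # True
```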

This is why the notion of something called a “risk-based audit” is somewhat anathema to me. Sure, please do check controls in areas where there is risk to the business, but that will quickly give rise to the causality dilemma commonly referred to as a chicken-or-the-egg scenario: if the audit is meant to reveal high-risk areas, then how could we possibly use risk as an input to scoping the audit (which is the premise of the risk-based audit)?

To bring this back home, let me say that I absolutely want and need somebody issuing and checking tickets at the door. But I’d never mistake them for risk managers. And if you wish to progress in your careers as IT risk professionals, try being the person on the other end of that phone call, and stop sweating the small stuff, because somebody’s probably trying to run off with a new TV while you’re squabbling over light bulbs.

Thus Wastes Man

A discussion on priority-making, risk, and the nature of humanity

I’m always interested in examples where we make implicit risk decisions. It happens naturally all the time, mostly because we lack the resources (time, skills) to properly evaluate the scenario. Despite being good at keeping us immediately out of harm’s way, this quick decision-making skill set (our “gut” reaction) is very often wrong about long-term risk. Nowhere is this more prevalent than in our own health decisions.

The FAIR risk-assessment framework discusses and flowcharts the reasons for failure to comply with policy; however, it is equally applicable to failures in decision making. At a high level, the flowchart goes like this: awareness, resources, motivation (evil, dumb, priorities). It’s usually the priorities that throw us for a loop: after I know what needs to be done and have the tools to do it, I still have to want to do it. Since we’re not often evil or dumb (thank goodness), I have to make it a higher priority than the other things I care about. It’s the same reason that, although I see the nail pop in my one wall all the time, I’m unlikely to ever really do anything about it (after all, I’m really busy with this blog and everything…).
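If it helps to see that chain written down, here’s a rough sketch in code. This is my own simplification with made-up labels, not the official FAIR flowchart:

```python
# A minimal sketch of the compliance/decision-failure chain described above.
# The ordering and labels are illustrative only.

def why_did_they_not_comply(aware: bool, has_resources: bool,
                            malicious: bool, understands: bool,
                            higher_priorities: bool) -> str:
    """Walk the chain: awareness -> resources -> motivation (evil, dumb, priorities)."""
    if not aware:
        return "awareness: they didn't know what needed to be done"
    if not has_resources:
        return "resources: they lacked the time, tools, or skills"
    # Motivation splits three ways in the discussion above.
    if malicious:
        return "motivation: evil (they chose not to comply)"
    if not understands:
        return "motivation: dumb (they didn't grasp the consequences)"
    if higher_priorities:
        return "motivation: priorities (something else mattered more)"
    return "no failure: they complied"

# The nail-pop example: I know about it, I own a hammer, I'm neither evil nor
# dumb about it -- it simply never beats whatever else I'm doing.
print(why_did_they_not_comply(aware=True, has_resources=True,
                              malicious=False, understands=True,
                              higher_priorities=True))
```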

It’s through these lenses (implicit decision making and the compliance flowchart) that I would like to discuss the following chart:

This is a chart provided by the FAIR Foundation on their website (no relation to the risk analysis method called FAIR). The chart details US funding priorities for various diseases (mostly, if not all, NIH funding). I care about many of these diseases personally, as I’m sure many of you do. It’s because of this personal attachment (my gut reaction) that I’m immediately appalled at the funding priorities that exist. If we are being rational about our resource allocation, then clearly the diseases that cause the most deaths need the highest levels of funding. On closer evaluation, however, there is more to disease than just death; many diseases substantially limit one or more major life activities (to borrow a phrase from the US Americans with Disabilities Act of 1990). Diabetes (especially Type 1) robs you of normal eating habits for the rest of your life, Alzheimer’s takes your mental faculties, and Parkinson’s steals your ability to move normally (to name just a few; there are many horrible outcomes for many of these diseases).

So if we are all rational humans, then why are these funding priorities what they are?

There’s a certain amount of complexity associated with these decisions. There is a system of systems responsible for these funding decisions, not the least of which is popularity (there are countless discussions like this happening all over the web). However, the reality is that all rubrics for funding will leave some people’s concerns out of the running. There just aren’t enough resources to go around.

I don’t have the right answers for this problem, but I wanted to use this chart as a mirror for our own IT Risk and Security funding priorities. There are doubtless many pet projects in your organization that will garner the most funding yet will not have rational support from a risk perspective. Fighting this gut-level decision making is the work of IT Risk professionals today. Just as parts of the medical community argue for a risk-based approach to research funding, you too should be spending your time and effort advocating for the reduction of risk in the scenarios that affect your organizations.

Given that you will never work for an organization that has an infinite budget for security (or anything, really), nor will you ever have all the time needed to address every concern, you must prioritize efforts to ensure the best results. Priority-making is inherently a risk-based activity. This is the essence of modern risk management.
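To make that concrete, here’s a toy sketch of the idea: rank the work by how much risk it reduces per dollar and fund down the list until the budget runs out. Every initiative, cost, and risk-reduction figure below is invented:

```python
# A toy sketch of priority-making as a risk-based activity.
# All initiatives, costs, and risk-reduction estimates are hypothetical.

budget = 250_000  # annual security budget in dollars (illustrative)

initiatives = [
    # (name, cost, estimated annual loss exposure reduced)
    ("Patch management tooling",         80_000, 400_000),
    ("User access recertification",      60_000, 250_000),
    ("Nation-state grade deception ops", 500_000, 100_000),
    ("Awareness campaign refresh",       40_000,  90_000),
]

# Rank by risk reduced per dollar spent, then fund greedily until the budget runs out.
ranked = sorted(initiatives, key=lambda i: i[2] / i[1], reverse=True)

funded, remaining = [], budget
for name, cost, reduction in ranked:
    if cost <= remaining:
        funded.append(name)
        remaining -= cost

print("Funded this year:", funded)
# The expensive, exotic project loses out -- not because it is worthless,
# but because the basics reduce more risk per dollar.
```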

Private Sector Perspectives on Cyberwar

I sat through a presentation recently about cyberwar. It’s a topic that engenders a lot of passion in the information security community. There seems to be a natural line drawn between those with previous experience in the military and government and those with primarily private sector experience. The typical military/government professional will attempt to elicit a response from those in private industry. Largely, those in private industry yawn (I’m excluding government contractors). And I think this is largely the right response.

Generally speaking, I want those in government to care a lot about war, and I want private industry to focus on creating value for investors, customers, and other stakeholders. A lot of cyberwar discussions talk about “kinetics,” or whether there is physical destruction. In large part, most private sector companies will not be able to withstand any sufficiently effective physical attack. This is because these organizations subscribe (implicitly or explicitly) to the theory of subsidiarity, which states in part that whatever can be accomplished at the lowest and smallest level of endeavor should be conducted there. Clearly, conducting and participating in war (cyber or otherwise, kinetic or not) is not the domain of the private sector. After all, military action is what our taxes fund (amongst other things). There is a history of the private sector being targeted by military action; taking out communications, other means of production, or agriculture is a time-tested technique to bring your opponent to their knees. We don’t typically see this kind of technique in modern warfare, but it’s common to apply pressure to the citizenry in order to force the hands of political leaders to yield to their enemy’s demands. In my opinion, this is the form in which we will see cyberwar: attacks against the private sector in order to force the hands of politicians.

So back at the presentation, the speaker responded to the seemingly innocuous question of whether or not we could win the cyberwar. He answered this question with a question: have we ever won a war? Well yes, of course we have. I quickly rattled off a few to the colleagues sitting at the table with me: WWII, WWI (although not well enough to avoid WWII), the Civil War, heck, even the Revolutionary War, etc., etc. If the question was meant, or interpreted, to mean “will we ever be free of cyberwar,” then clearly the answer is no; but yes, we can of course win the wars and skirmishes that may arise in our future. However, there will always be an ever-present threat on the horizon that will demand vigilance at some level.

So how do you prepare for these kinds of skirmishes? Well, it depends on the threats you are defending against. Sophisticated nation-states will likely represent the 90th, 95th, or even 99th percentile of attackers. To be clear, for most organizations, you can spend yourself into oblivion defending against these kinds of attackers. However, the same organizations are likely not doing an effective job of defending against even 50th-percentile attackers. As with all risk assessments, context matters, and nowhere more so than with cyberwar. Your organization’s insurance policies probably don’t even cover acts of war, so if you think that cyberwar is a concern for your organization, then you have more exposure in other places. Security is often surprisingly boring, and here is a great example: to defend against that 90th-percentile attacker, you probably have to start by doing a good job defending against the lower-tiered attackers. Focus on patching, currency, and user access. It’s boring, but it has good payoffs. Attend the conference and enjoy the cyberwar talks, but don’t forget the basics.

How to Play


I recently took my daughter to a kid’s birthday party. The location had one of those kids’ gyms where you kick your shoes off and dive into the balls and have a great time. Risk never leaves my mind, so when I was reviewing the sign that was posted over the entrance to the area, I found an interesting parallel that I thought I’d share.

There was a sign posted that said, “How To Play,” followed by what is presumably a list of rules on how to play. The gate was guarded by a disinterested young man sketching on a pad and ostensibly enforcing the rules of play. What were those rules? See for yourself:

  1. No shoes or coats
  2. No running or jumping
  3. No throwing balls

What is missing from this list is exactly what the title of the sign said would be there: rules for playing. Instead, what we have is a list of how NOT to play. While my little one was playing, she was having a difficult time getting up some of the ramps in her stockinged feet, so I slipped her socks off and sent her on her way. My wife chastised me because another sign, somewhat out of sight, indicated that socks were required. The disinterested young man from earlier failed to notice.

I think there are some clear parallels to corporate security policies in this brief example. First, information security policies rarely identify “How to Play.” Instead, like our sign example above, we frequently find a list of things you are not allowed to do. This is an example of security-centric thinking. Know this: the people in your company are interested in knowing How To Play. Tell them the approved technologies, processes, and systems that they are allowed to use without running afoul of the policy. This is the basic logic of a whitelist versus a blacklist, so help your organization know how to do the right thing (I’m assuming there’s more you don’t want them doing than otherwise, so save time and just tell them what to do).
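As a toy illustration of that whitelist logic (the tool names below are placeholders, not a recommendation or a real standard):

```python
# A toy illustration of "tell them how to play": an allow list of approved
# technologies, rather than an ever-growing list of forbidden ones.
# The entries are placeholders, not a real standard.

APPROVED_FILE_SHARING = {"corporate-sharepoint", "approved-sftp-gateway"}

def how_to_share_files(tool: str) -> str:
    if tool.lower() in APPROVED_FILE_SHARING:
        return f"{tool} is approved -- go ahead."
    return ("That tool isn't on the approved list. "
            f"Use one of: {', '.join(sorted(APPROVED_FILE_SHARING))}.")

print(how_to_share_files("personal-dropbox"))
```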

Next, the metaphor of the disinterested enforcement agent is, I’m sure, not lost on most. Enforcement is tricky business, and worthy of longer treatment, but for today’s blog post, focus on the economics of the situation. There was one guy at the entrance who ostensibly had responsibility for enforcing the rules across the entire area (it was very large, with between 30 and 50 kids). Clearly he was going to fail at 100% enforcement. But just like in other areas of life, it’s often just as effective to selectively enforce in the areas that are high-risk.

Lastly, don’t forget the allure of the one-stop shop. Having everything someone needs to know in one place is valuable. Don’t make them hunt for that hidden sign to find out that bare feet are not allowed. Everything should be clearly visible and in one place.

In summary, we as security practitioners can make it easy or hard for people to comply. You get to decide, “How To Play” for your organizations. Choose wisely.