Schrödinger’s Christmas

In 1935, Austrian physicist Erwin Schrödinger devised the thought experiment known as Schrödinger’s Cat. It’s a gruesome but pretend experiment where we place a cat in a cage (sometimes a box) with a device that could randomly release a poison capable of killing the cat. However, the device may also never release the poison, and the cat would remain alive. There are many variations on this, such as one where opening the box itself releases the poison, rendering the cat dead. One of the implications is that the cat could exist in two states at the same time: both alive and dead. We’d never know for sure unless we open the box, but then we’d be complicit in the cat’s death (I suppose this could be done with any pet, but Erwin must have hated cats).

Over the years, there have been many versions and extensions of this thought experiment. One is that when people are aware they are being observed, they behave differently. In what way did their behavior change? We can never know, as the observation itself changed the outcome (the observer effect, often conflated with Heisenberg’s Uncertainty Principle). In security, we rely on this behavioral effect as a preventative control. It’s the reason that home security signs exist: somewhat paradoxically, letting would-be attackers know the level of security that exists in your home serves the purpose of deterring attackers (by revealing our control strength, we make it possible for the attacker to assess their own risk, including evaluating how good their skills are at overcoming our controls). This same concept is in play when we use login banners and periodically remind our users that their activity on company systems is monitored. We let them know that we are watching, and in so doing we change the outcome in the hopes that good people remain that way.

Which is exactly what we try to do with our children this time of year. Clearly, the “naughty or nice” list is subject to halo bias; children work harder at being on the nice list in December than in any other month (especially January). However, they are also more aware they are being monitored. We as parents reinforce this verbally; they hear it in carols and see it in effect in holiday television programming. Austrian and German cultures took this concept of monitoring children’s behavior to another extreme (although admittedly more stick than carrot) with the Krampus (Now a major motion picture!) and Belsnickel characters, who punished naughty children in a horribly violent and terrifying manner. Not to be outdone, Japanese New Year’s ceremonies feature the Namahage character, who wears a demonic mask and frightens lazy or misbehaving children into obeying their parents. All of which is to say that we have a long historical understanding of the value monitoring plays in regulating behavior. Now I need to get back to work because the Elf on the Shelf is staring at me and I don’t want to end up on the naughty list this year…

Why I Work in Risk

I was always a big fan of Alice in Wonderland. Having read the book several times, I still wonder why she goes down the hole at all. Alice abandons all that she knows, all that everyone around her acknowledges as being rational and true, for something that we’re told is simply the pursuit of curiosity. But I never believed that. I think Alice goes down that hole for passion.

Many people come close to finding their life’s passions. But just like when we approach the edge of a cliff, we get that tickle in the back of our head that raises doubts. Don’t get too close, you might fall. And if we are being really honest with ourselves: Don’t get too close, you might jump. And most people do the rational thing: they pull back, and in so doing, they never realize their passion.

Years ago, I was very fortunate to be shown the edge; shown my very own rabbit hole. And I jumped with everything I had. I abandoned rational thought and left behind all that I knew to be true to pursue a passion. I sacrificed sure things to keep falling down the hole as long as I could. I soon discovered that what I thought I knew was a delusion; the hole became reality, and I wanted everyone to know, with the surety I had, what truth really looked like.

But that’s how I got into risk, not why I stayed.

I stay in risk because I’m not done falling. I need to keep finding the next cliff from which to leap. That fear we experience while falling removes the impurities that cloud our vision. It’s only in falling that we can be honest enough with ourselves to allow us to create and I’m not done yet. My head is full of ideas to keep me busy, but if I don’t keep jumping, I will run out. And I never want to run out.

And that’s why I keep working in risk: because I see risk everywhere I look and in everything I do. I live my life in graphs that wax and wane with all I do, reflecting the quality of my decision making, forecasting, and unforeseen events. I measure life’s metabolism to survive and eventually I will fail. And I find it thrilling.

And for a jumper like me, I need those thrills. So I keep looking for the next cliff.



Always Mistrust New Risk Equations

There’s a cynical meme out there about mistrusting new (as well as proprietary) encryption methods. Unless it’s been around long enough to suffer the slings and arrows of academic and practitioner criticism, it’s probably not worth entrusting your security to it.

I’m hereby extending this in a new corollary:


All claims of “new” equations for calculating risk are to be publicly vetted before you entrust your risk management capabilities to them.

To wit, the NIST IR 8062 Draft Privacy Risk Management for Federal Information Systems standard (published 28 May 2015) documents what it describes as a “new equation [that] can calculate the privacy risk of a data action…” Ostensibly this was required as “a privacy risk model that can help organizations identify privacy risk as distinct from security risk requires terminology more suited to the nature of the risk.” They then go on to describe the inadequacy of “vulnerability” and “threat” and how they cannot possibly relate to privacy concerns.

In truth, there is nothing new or novel about the risk equation they propose:

Privacy Risk = Likelihood of a problematic data action X Impact of a problematic data action

If this looks familiar, it’s because it’s reminiscent of every other risk equation out there. It attempts to measure how often bad things happen and, when they do, how bad they are likely to be. This is the foundational element of all risk, and it doesn’t take much to show that it is applicable across a multitude of scenarios: car insurance, life insurance, information security, climate change, medical malpractice, and, yes, privacy as well. It’s not as if they stumbled across the one unique field of study for which there is no possible way prior work in risk could apply.
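That generic structure can be sketched as a simple expected-loss calculation. The scenario names and numbers below are purely illustrative, not real estimates:

```python
# Generic risk equation: risk = likelihood x impact.
# All scenario names and figures here are made up for illustration.

def expected_loss(likelihood_per_year, impact):
    """Expected annualized loss for a single scenario."""
    return likelihood_per_year * impact

scenarios = {
    "car accident":          (0.05, 8_000),      # 5% chance/yr, $8k repair
    "privacy breach":        (0.02, 250_000),    # a "problematic data action"
    "medical malpractice":   (0.001, 1_000_000),
}

for name, (likelihood, impact) in scenarios.items():
    print(f"{name}: ${expected_loss(likelihood, impact):,.0f} expected loss/yr")
```

The same two-term multiplication works regardless of domain; only the sources of the likelihood and impact estimates change.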

Most of their argument rests upon how different privacy is from security; however, the concepts apply equally. If we decompose the likelihood side of the equation into how often attempts are made, it’s easy to see how “threatening” they can be. And note that this doesn’t have to be malicious, either. It’s easily applicable to accidental scenarios as well. It’s certainly “threatening” to an organization if the envelope stuffer misfires and puts a financial statement into a mismatched envelope. Malicious actions designed to compromise the privacy of individuals through information systems are already covered by security risk standards, so the distinct characteristics of privacy scenarios are not apparent.

The term “vulnerability” has some disputed usage in security vs. risk, but it works for privacy either way. If you mean “vulnerability” in the sense of “control weakness” or “control deficiency” (such as a missing patch) you will find it works fine for privacy. A series of controls that keep the envelope stuffer from making a mistake could suffer just such a deficiency. But if you mean “vulnerability” in the FAIR sense of answering the question “How vulnerable are we to active attempts to compromise privacy?” then you will find that works as well.
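The FAIR-style decomposition described above (loss event frequency as threat event frequency times vulnerability, where vulnerability is the probability an attempt succeeds) applies to the envelope-stuffer scenario just as well as to an attack. A minimal sketch, with purely illustrative numbers:

```python
# FAIR-style decomposition of the likelihood side of the risk equation.
# Numbers below are invented for illustration only.

def loss_event_frequency(threat_event_frequency, vulnerability):
    """Loss events/yr: attempts (or error opportunities)/yr x P(attempt succeeds)."""
    return threat_event_frequency * vulnerability

def annualized_risk(threat_event_frequency, vulnerability, impact):
    """Likelihood x impact, with likelihood decomposed FAIR-style."""
    return loss_event_frequency(threat_event_frequency, vulnerability) * impact

# Accidental scenario: 120 envelope-stuffing runs/yr, controls fail 1% of
# the time, each mis-mailed financial statement costs ~$5,000 to remediate.
print(annualized_risk(120, 0.01, 5_000))
```

The same function covers the malicious case; only the interpretation of the inputs shifts from error opportunities to attack attempts.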

I understand the desire to claim the mantle of creator and inventor; however, it’s sheer folly to ignore the history that has brought us here. There is a well-worn saying in academia about being able to see far because we stand on the shoulders of giants. To me that’s a statement of humility; a reminder to give credit where it’s due.

The Structural Engineer Saves You from the Architect

I recently heard the phrase “The structural engineer saves you from the architect.” It was playful banter between two members of the construction and building professions. See, the root of the joke is that architects will design these fanciful buildings that, while visually appealing, are totally impractical in ways that only the structural engineer would understand. It’s a twist on that old saw about Mars vs. Venus.

While humorous, it reminded me of the control strength measurement approach, which asks you to assess the strength of the control design as well as its effectiveness. Let’s assume for a second that you are building a new house (well, commissioning one to be built, but you know what I mean). You meet with an architect and they design you an amazing new home, superb in all the ways you want, one that even exceeds your expectations. So you score the design accordingly. Those plans are sent over to the builders and you can’t wait to see that amazing design made real. A couple months later you are invited to a walk-through of your new dream home.

Except it’s not what you hoped. You can see the design through the poor construction, but it’s nothing more than a sad echo of your expectations. In addition to general shoddy construction, problems with the design are made apparent. All the cantilevered balconies are there, but there wasn’t enough room to properly secure the support beams, so the walls beneath are sagging under the excess weight while the beams bow downward. Walls that would serve as load-bearing supports for the home do not have proper reinforcements. Aside from the severe structural problems, there are minor aesthetic issues as well. The baseboards have quarter round base shoe, but the base cap doesn’t match. Chair rails are crooked. And on and on.

So let’s assess how satisfied you would be with your new home. What percentage of your happiness would you say is derived from the design and how much from the actual construction? I couldn’t imagine trying to host a housewarming party and reassuring the guests that you are happy with the house because it was so well designed.

The same is true of control strength measurement. Could you imagine having to explain to your customers that you gave yourself credit for how well you designed your controls, even though the implementation left much to be desired? No, in the end, what really matters is how effective it is. Who cares how well it was designed; tell me how good it is at repelling bad guys. In fact, I might even be so bold as to say that I could derive the strength of the control design once I know how well it performs under the stress of attack.

In the same way that it’s the job of the structural engineer to keep the architect honest, it’s the job of the risk analyst to speak the truth to the risk owners. While good intentions are laudable, they do nothing to protect your organization’s data from the wicked actions of others. Good design can help, but only solid execution can protect you.


The “Yes, and…” Approach to IT Risk Mgmt

In my January column for @ISACA I talk about the use of an improv technique called “yes, and…” that you can read about here.

The idea is to keep the improv scene going as long as possible by working with your partner versus opposing them. If they propose something, no matter how outlandish, you assume it’s valid and work with it. This gives you the opportunity to redirect the outcome. However, if you shut down the scene and attempt to wrestle control away from your partner, the scene gets awkward, and if you do it enough, they tend to not want to work with you anymore.

It’s a metaphor you see: work with the business on their initiatives and you get invited back to the table.

Using Risk to Take the High Road

My @ISACA column for November was published recently. You can read it here.

This was a tough one to write, and not just due to the 200-word limit (which I still exceeded). Overall, lots of security professionals tend to (I believe) unknowingly speak ill of the management of the companies for which they work. It’s second nature to think that your judgement about security overrides whatever else management is doing. My point with this column was to help people see that risk management defines priority across the organization; in other words, I’m sure that marketing, accounting, sales, etc. think that whatever they are working on is far more important than what security is doing. Thinking about these priorities through a risk lens helps people level-set their work against the rest of the company’s work. I use an outraged “author’s voice” to wake people up to what they are saying and how they express it.

This was difficult to write primarily because I didn’t want to insult anybody, but also to help people understand that the words they use, even amongst other security professionals, are not productive in improving relationships with the rest of the company.

New Journal Article on Supplier Security Assessments

Today an article I wrote with help from Kevin Chalk was published in the ISSA Journal. When I am able, I will post the text here for review. It should be in your inbox if you prefer to read e-versions of articles. Not sure when they get mailed out.

It’s a great piece on how to apply some soft skills (and some decidedly not so soft skills) in the furtherance of conducting a supplier review. You will never know as much about the supplier as they know about themselves, so this is a good approach to trying to uncover where there may be a lack of truthfulness in certain responses.

We wrote about using the Reid Technique, which is a standard in law enforcement interviewing and interrogation. It’s worth exploring to see if it fits into your own security and risk program.