Always Mistrust New Risk Equations

There’s a cynical meme out there about mistrusting new (as well as proprietary) encryption methods: unless a method has been around long enough to suffer the slings and arrows of academic and practitioner criticism, it’s probably not worth entrusting your security to it.

I’m hereby extending this in a new corollary:


All claims of “new” equations for calculating risk are to be publicly vetted before you entrust your risk management capabilities to them.

To wit, the NIST IR 8062 draft, Privacy Risk Management for Federal Information Systems (published 28 May 2015), documents what it describes as a “new equation [that] can calculate the privacy risk of a data action…” Ostensibly this was required because “a privacy risk model that can help organizations identify privacy risk as distinct from security risk requires terminology more suited to the nature of the risk.” The authors then go on to describe the inadequacy of “vulnerability” and “threat” and how those terms cannot possibly relate to privacy concerns.

In truth, there is nothing new or novel about the risk equation they propose:

Privacy Risk = Likelihood of a problematic data action × Impact of a problematic data action

If this looks familiar, it’s because it’s reminiscent of every other risk equation out there. It attempts to measure how often bad things happen and, when they do, how bad they are likely to be. This is the foundational element of all risk, and it doesn’t take much to show that it is applicable across a multitude of scenarios: car insurance, life insurance, information security, climate change, medical malpractice, and, yes, privacy as well. It’s not as if they stumbled across the one unique field of study for which there is no possible way prior work in risk could apply.
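To see how generic the equation really is, here is a minimal sketch that applies the same likelihood-times-impact arithmetic to several of the scenarios above. All scenario names and numbers are hypothetical, chosen purely for illustration:

```python
def annualized_risk(likelihood_per_year: float, impact: float) -> float:
    """The generic risk equation: how often a bad thing happens
    times how bad it is when it does (expected annual loss)."""
    return likelihood_per_year * impact

# Hypothetical figures -- the point is that the same math fits every domain.
scenarios = {
    "car insurance: collision claim":  (0.05, 8_000),   # 5% chance/year, $8k claim
    "privacy: misdirected statement":  (2.0, 1_500),    # ~2 incidents/year, $1.5k each
    "security: account takeover":      (0.10, 50_000),  # 10% chance/year, $50k loss
}

for name, (likelihood, impact) in scenarios.items():
    print(f"{name}: expected annual loss ${annualized_risk(likelihood, impact):,.0f}")
```

Swap in privacy-flavored terms (“problematic data action”) and nothing about the calculation changes.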

Most of their argument rests upon how different privacy is from security; however, the concepts apply equally. If we decompose the likelihood side of the equation into how often attempts are made, it’s easy to see how “threatening” they can be. And note that this doesn’t have to be malicious, either. It’s easily applicable to accidental scenarios as well. It’s certainly “threatening” to an organization if the envelope stuffer misfires and puts a financial statement into a mismatched envelope. Malicious actions designed to compromise the privacy of individuals through information systems are already covered by security risk standards, so the supposedly distinct characteristics of privacy scenarios are not apparent.

The term “vulnerability” has some disputed usage in security vs. risk, but it works for privacy either way. If you mean “vulnerability” in the sense of “control weakness” or “control deficiency” (such as a missing patch) you will find it works fine for privacy. A series of controls that keep the envelope stuffer from making a mistake could suffer just such a deficiency. But if you mean “vulnerability” in the FAIR sense of answering the question “How vulnerable are we to active attempts to compromise privacy?” then you will find that works as well.

I understand the desire to claim the mantle of creator and inventor; however, it’s sheer folly to ignore the history that has brought us here. There is a well-worn saying in academia about being able to see far because we stand on the shoulders of giants. To me that’s a statement of humility; a reminder to give credit where it’s due.

The Structural Engineer Saves You from the Architect

I recently heard the phrase “The structural engineer saves you from the architect.” It was playful banter between two members of the construction and building professions. See, the root of the joke is that architects will design these fanciful buildings that, while visually appealing, are totally impractical in ways only the structural engineer would be able to understand. It’s a twist on that old saw about Mars vs. Venus.

While humorous, it reminded me of the control strength measurement approach, which asks you to assess the strength of the control design as well as its effectiveness. Let’s assume for a second that you are building a new house (well, commissioning one to be built, but you know what I mean). You meet with an architect and they design you an amazing new home, superb in all the ways you wanted and exceeding your expectations in so many more. So you score the design accordingly. Those plans are sent over to the builders and you can’t wait to see that amazing design made real. A couple months later you are invited to a walk-through of your new dream home.

Except it’s not what you hoped. You can see the design through the poor construction, but it’s nothing more than a sad echo of your expectations. In addition to generally shoddy construction, problems with the design itself become apparent. All the cantilevered balconies are there, but there wasn’t enough room to properly secure the support beams, so the walls beneath are sagging under the excess weight while the beams bow downward. Walls that should serve as load-bearing supports for the home do not have proper reinforcements. Aside from the severe structural problems, there are minor aesthetic ones as well. The baseboards have quarter-round base shoe, but the base cap doesn’t match. Chair rails are crooked. And on and on.

So let’s assess how satisfied you would be with your new home. What percentage of your happiness would you say is derived from the design and how much from the actual construction? I couldn’t imagine trying to host a housewarming party and reassuring the guests that you are happy with the house because it was so well designed.

The same is true of control strength measurement. Could you imagine having to explain to your customers that you gave yourself credit for how well you designed your controls, even though the implementation left much to be desired? No, in the end, what really matters is how effective the control is. Who cares how well it was designed; tell me how good it is at repelling bad guys. In fact, I might even be so bold as to say that I could derive the strength of the control design once I know how well it performs under the stress of attack.

In the same way that it’s the job of the structural engineer to keep the architect honest, it’s the job of the risk analyst to speak the truth to the risk owners. While good intentions are laudable, they do nothing to protect your organization’s data from the wicked actions of others. Good design can help, but only solid execution can protect you.

 

The “Yes, and…” Approach to IT Risk Mgmt

In my January column for @ISACA I talk about the use of an improv technique called “yes, and…” that you can read about here.

The idea is to keep the improv scene going as long as possible by working with your partner rather than opposing them. If they propose something, no matter how outlandish, you assume it’s valid and work with it. This gives you the opportunity to redirect the outcome. However, if you shut down the scene and attempt to wrestle control away from your partner, the scene gets awkward, and if you do it enough they tend to not want to work with you anymore.

It’s a metaphor, you see: work with the business on their initiatives and you get invited back to the table.

Using Risk to Take the High Road

My @ISACA column for November was published recently. You can read it here.

This was a tough one to write, and not just due to the 200-word limit (which I still exceeded). Overall, lots of security professionals tend to (I believe) unknowingly speak ill of the management of the companies for which they work. It’s second nature to think that your judgement about security overrides whatever else management is doing. My point with this column was to help people see that risk management defines priority across the organization; in other words, I’m sure that marketing, accounting, sales, etc. think that whatever they are working on is far more important than what security is doing. Thinking about these priorities through a risk lens helps people level-set their work against the rest of the company’s work. I use an outraged “author’s voice” to wake people up to what they are saying and how they express it.

This was difficult to write primarily because I didn’t want to insult anybody, but also to help people understand that the words they use, even amongst other security professionals, are not productive in improving relationships with the rest of the company.

New Journal Article on Supplier Security Assessments

Today an article I wrote with help from Kevin Chalk was published in the ISSA Journal. When I am able, I will post the text here for review. It should be in your inbox if you prefer to read e-versions of articles. Not sure when they get mailed out.

It’s a great piece on how to apply some soft skills (and some decidedly not-so-soft skills) when conducting a supplier review. You will never know as much about the supplier as they know about themselves, so this is a good approach to trying to uncover where there may be a lack of truthfulness in certain responses.

We wrote about using the Reid Technique, which is a standard in law enforcement interviewing and interrogation. It’s worth exploring to see if it fits into your own security and risk program.

A Cooperative Model for Security, Audit, and Risk: A collaborative approach to risk-based audits

Information technology audit is a relatively recent addition to the professional world of auditing. A review of the history of IT audit leads one back to the Electronic Data Processing Auditors Association (EDPAA), the forerunner of what would eventually become the Information Systems Audit and Control Association (ISACA)1. Although EDPAA published control objectives in the 1970s, what would eventually become ISACA’s flagship publication (Control Objectives for IT; COBIT) was published in 19962. In large part, this publication defines controls for IT systems, but it is grounded in the definitions of controls codified by The Committee of Sponsoring Organizations of the Treadway Commission Internal Control-Integrated Framework (COSO)3. Clearly, IT auditing was happening before these organizations codified the practice, as reliance upon IT systems was identified as critical to organizational success. Indeed, the authors of the original COBIT document identify their impetus for its creation thus:

“In recent years, it has become increasingly evident to regulators, lawmakers, users, and service providers that there is a need for a reference framework for security and control in information technology (IT).”2

Continue reading A Cooperative Model for Security, Audit, and Risk: A collaborative approach to risk-based audits

Open Group Podcast on Risk – June 2013

I participated in my second risk management podcast for the Open Group, which was published today. I like this one better than my previous one–I tried to talk slower in this one, anyway  ;-)

I was happy with the topics that we discussed, most notably that as regulators become more aware of the capabilities of quantitative risk assessment techniques they will begin demanding them from those they are reviewing. Of course, Jack and Jim were great as well and the conversation was expertly moderated by Dana.