Accepted at RSA – Quant Risk Implementation

I’m very pleased to announce that my proposal was accepted for this year’s RSA Conference! I’ll be giving an overview of the quantitative risk framework I’ve implemented at my firm, TIAA.

I’ll be speaking Wednesday morning (April 18th) in the Security Strategy Track as an Advanced Topic.

Here is the abstract:

This session will review the Cyber Risk Framework implemented by TIAA that scales from the granular level up to business-level aggregate risk reporting, avoiding the typical pitfalls of being either too narrow or too broad. Included in this session are discussions of policy, standards, configuration baselines, quantification, ORM/ERM risk reporting, and project lifecycle engagement.

FAIR plays a big part in our framework, so you can be sure your questions about how to implement FAIR in your own organization will be answered.

Lowest Common Risk Denominator

I tackle the notion of risk appetite in this month’s column using some metaphors with which you might be familiar. You don’t get to pick your auto insurance coverage by expressing the number of accidents you are willing to accept, yet that’s how a lot of organizations think about cyber risk. Fortunately, the cyber insurance industry is going to force us all into thinking about risk in dollars, the same as everyone else, because that is the lowest common risk denominator.

You can read more here.

Always Mistrust New Risk Equations

There’s a cynical meme out there about mistrusting new (as well as proprietary) encryption methods: unless it’s been around long enough to suffer the slings and arrows of academic and practitioner criticism, it’s probably not worth entrusting your security to it.

I’m hereby extending this with a new corollary:

All claims of “new” equations for calculating risk are to be publicly vetted before entrusting your risk management capabilities to them.

To wit, the NIST IR 8062 draft, Privacy Risk Management for Federal Information Systems (published 28 May 2015), documents what it describes as a “new equation [that] can calculate the privacy risk of a data action…” Ostensibly this was required because “a privacy risk model that can help organizations identify privacy risk as distinct from security risk requires terminology more suited to the nature of the risk.” The authors then go on to describe the inadequacy of “vulnerability” and “threat” and how they cannot possibly relate to privacy concerns.

In truth, there is nothing new or novel about the risk equation they propose:

Privacy Risk = Likelihood of a problematic data action × Impact of a problematic data action

If this looks familiar, it’s because it’s reminiscent of every other risk equation out there. It attempts to measure how often bad things happen and, when they do, how bad they are likely to be. This is the foundational element of all risk, and it doesn’t take much to show that it is applicable across a multitude of scenarios: car insurance, life insurance, information security, climate change, medical malpractice, and, yes, privacy as well. It’s not as if they stumbled across the one unique field of study for which there is no possible way prior work in risk could apply.
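To show just how generic that equation is, here is a minimal Monte Carlo sketch of it. This is my own illustration, not anything from the NIST draft; the event frequency and impact range are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def average_annual_loss(freq_per_year, impact_low, impact_high, trials=100_000):
    """Monte Carlo sketch of the generic equation: risk = likelihood x impact.
    The frequency and impact ranges are illustrative, not real data."""
    # How many problematic events occur in each simulated year.
    events = rng.poisson(freq_per_year, size=trials)
    # Total impact per simulated year: the sum of per-event impacts.
    totals = np.array([rng.uniform(impact_low, impact_high, size=n).sum() for n in events])
    return totals.mean()

# Hypothetical inputs: ~2 problematic events per year, each costing $5k-$50k.
print(f"Average annual loss: ${average_annual_loss(2.0, 5_000, 50_000):,.0f}")
```

Swap the labels (“problematic data action,” “privacy harm,” “security incident”) and the arithmetic is unchanged; that is precisely the point.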

Most of their argument rests upon how different privacy is from security; however, the concepts apply equally. If we decompose the likelihood side of the equation into how often attempts are made, it’s easy to see how “threatening” they can be. And note that this doesn’t have to be malicious, either; it’s easily applicable to accidental scenarios as well. It’s certainly “threatening” to an organization if the envelope stuffer misfires and puts a financial statement into a mismatched envelope. Malicious actions designed to compromise the privacy of individuals through information systems are already covered by security risk standards, so the distinct characteristics of privacy scenarios are not apparent.

The term “vulnerability” has some disputed usage in security vs. risk, but it works for privacy either way. If you mean “vulnerability” in the sense of “control weakness” or “control deficiency” (such as a missing patch) you will find it works fine for privacy. A series of controls that keep the envelope stuffer from making a mistake could suffer just such a deficiency. But if you mean “vulnerability” in the FAIR sense of answering the question “How vulnerable are we to active attempts to compromise privacy?” then you will find that works as well.
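To make that decomposition concrete, here is a toy sketch. The structure follows FAIR’s loss event frequency = threat event frequency × vulnerability, but the envelope-stuffer numbers are invented for illustration.

```python
def loss_event_frequency(threat_event_frequency, vulnerability):
    """FAIR-style decomposition of the likelihood side of the equation:
    how often attempts occur, times the probability an attempt succeeds.
    'Attempts' need not be malicious; error-prone processes count too."""
    return threat_event_frequency * vulnerability

# Hypothetical accidental scenario: the mailroom stuffs 50,000 envelopes a
# year, and controls let roughly 1 in 25,000 mismatches slip through.
print(loss_event_frequency(50_000, 1 / 25_000))  # ~2.0 loss events per year
```

Whether the “attempt” is an attacker probing a system or a machine stuffing an envelope, the math does not care; only the labels change.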

I understand the desire to claim the mantle of creator and inventor; however, it’s sheer folly to ignore the history that has brought us here. There is a well-worn saying in academia about being able to see far because we stand on the shoulders of giants. To me that’s a statement of humility; a reminder to give credit where it’s due.

Speaking at OpRisk North America 2015

It’s a busy week for me. In addition to the webinar this Friday, next Monday (23 March) I’ll be holding a workshop at 11:00 AM in the Data Quality track of the OpRisk North America conference. I’ll be talking about financial metrics, risk appetite, volatility trends, and scenario analysis. You can’t have quality data without quantification, so that will be a big part of my presentation.

Risk relativism is dangerous science

As we close out this year, one thought has been dominating my days. We’ve all learned how to practice risk in different places (where I’ve worked is different from where you’ve worked, and so on). So much of the practice of risk is based on the notion of personality: we do risk one way today because I’m leading it; tomorrow, a different person leading it would insist on doing it the way they are familiar with. Essentially, like all humans, we overemphasize our own experiences in risk as being more “true” than others’. In other words, we are more likely to assume that what we’ve experienced before can be made to occur again with the same results.

Unfortunately, this is not how science views the same set of experiences. In the scientific method, not everyone’s experience qualifies as valid scientific observation. In a room full of risk people, the sheer fact that we’ve experienced different things doesn’t make all of those experiences valid, at least not in a scientific way. Any attempt to treat everyone’s experience as objectively valid is incorrect and potentially dangerous. I often refer to this as “risk relativism.”

An overriding characteristic of scientific observation is the ability of different practitioners to recreate the results (reproducibility) given a similar set of environmental variables. In fact, a case study is probably the most accurate research method for what most of us call our work experience. Case studies are very concerned with validity: construct validity, internal validity, external validity, and reliability. The combination of these four things contributes to the overall reproducibility (aka objectivity) of the research. In all cases, there is a need for multiple sources of data and/or observations to ensure that each unit of analysis (workplace experience) is “coded,” or analyzed, accurately. Further, the applicability of results is carefully curtailed. For example, a single-unit case study has limited applicability outside of its own environment (one company’s risk experience is obviously most applicable to just that company), but multiple-unit case studies can be applied more broadly.

But what factors are the most critical for establishing broad applicability and reproducibility? In my opinion, the most important is the use of an accurate model; that is, a model that can reliably be used to predict outcomes. Put another way: across all your industry experience, which model are you applying that allows meaningful measurements to be made and effective comparisons to be drawn? Work in information theory tells us that all measurements are inexact (statistical, if you will). This lends great credence to the use of statistical methods to reduce uncertainty in our measurements as we move from workplace to workplace.
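As a toy illustration of that last point (my own example, with invented numbers, not data from any particular study): pooling noisy observations from multiple workplaces shrinks the standard error, and therefore the confidence interval, around the quantity being measured.

```python
import math
import statistics

def mean_with_ci(observations, z=1.96):
    """Sample mean with an approximate 95% confidence interval.
    More observations -> smaller standard error -> less uncertainty."""
    m = statistics.mean(observations)
    se = statistics.stdev(observations) / math.sqrt(len(observations))
    return m, (m - z * se, m + z * se)

# Hypothetical incident-rate observations (events/year) from two workplaces.
site_a = [1.8, 2.4, 2.1, 1.9, 2.6]
site_b = [2.2, 2.0, 2.5, 1.7, 2.3]

print(mean_with_ci(site_a))           # one site's experience: wider interval
print(mean_with_ci(site_a + site_b))  # pooled data: narrower interval
```

One workplace’s experience is a single unit of analysis; the more units you can validly pool, the less your conclusions depend on where you happened to work.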

What are you going to do in 2015 to increase your use of scientifically valid models of measurement?