Accepted at RSA 2019 – Virtual Pen Testing

I’m very pleased to announce that I’ve been accepted to speak again at next year’s RSA Conference. Jointly with my colleague Joel Amick, I’ll be presenting on an agent-based model concept that uses FAIR risk results. Joel’s team and mine developed a POC of this work, and we can’t wait to share what we built with you in March!

Here are the details of the session; please be sure to save it to your agenda!

ISSA Journal – The Future of ITRM will be Quantified

The December issue of the ISSA Journal has been released, and my article on the future of IT risk is on the cover. The theme for this month’s Journal is “The Next 10 Years,” and I wanted to highlight where I see the industry going. I begin with a look back at the progress away from ordinal-scale, verbal risk labels and then project where things will go. I cover regulatory, insurance, and customer pressures to quantify, and I outline a path forward in which risk quantification becomes a competitive advantage.

Check it out in your mailbox or read it online now.

RSAC18 Wrap up

I was very honored to have had the chance to share my quantitative cyber risk journey with the broader security community last week at the RSA Conference. My session had over 100 people in attendance (quite a feat at 8AM on a Wednesday!), and the questions and follow-ups were so good they lasted until we were kicked out of the room. The book signing afterwards caused the bookstore to sell out of copies of Measuring and Managing Information Risk.

I shared some more thoughts on the conference with the FAIR Institute here (where you can also read thoughts from other FAIR practitioners). Lastly, my session slides are available here. Be sure to reach out if you are interested in learning more; I’ve already had one follow-on session with someone who was unable to attend.


Accepted at RSA – Quant Risk Implementation

I’m very pleased to announce that my proposal was accepted for this year’s RSA Conference! I’ll be giving an overview of the quantitative risk framework I’ve implemented at my firm, TIAA.

I’ll be speaking Wednesday morning (April 18th) in the Security Strategy Track as an Advanced Topic.

Here is the abstract:

This session will review the Cyber Risk Framework implemented by TIAA, which scales from the granular level up to business-level aggregate risk reporting while avoiding the typical pitfalls of being either too narrow or too broad. The session includes discussions of policy, standards, configuration baselines, quantification, ORM/ERM risk reporting, and project lifecycle engagement.

FAIR plays a big part in our framework, so you can be sure to have your questions answered about how to implement FAIR in your organization.

Lowest Common Risk Denominator

I tackle the notion of risk appetite in this month’s column using some metaphors with which you might be familiar. You don’t get to pick your auto insurance coverage by expressing the number of accidents you are willing to accept, yet that’s how a lot of organizations think about cyber risk. Fortunately, the cyber insurance industry is going to force us all into thinking about risk in dollars, the same as everyone else, because that is the lowest common risk denominator.

You can read more here.

Always Mistrust New Risk Equations

There’s a cynical meme out there about mistrusting new (as well as proprietary) encryption methods. Unless a method has been around long enough to suffer the slings and arrows of academic and practitioner criticism, it’s probably not worth entrusting your security to it.

I’m hereby extending this with a new corollary:

All claims of “new” equations for calculating risk should be publicly vetted before you entrust your risk management capabilities to them.

To wit, the NIST IR 8062 Draft Privacy Risk Management for Federal Information Systems standard (published 28 May 2015) documents what it describes as a “new equation [that] can calculate the privacy risk of a data action…” Ostensibly this was required as “a privacy risk model that can help organizations identify privacy risk as distinct from security risk requires terminology more suited to the nature of the risk.” They then go on to describe the inadequacy of “vulnerability” and “threat” and how they cannot possibly relate to privacy concerns.

In truth, there is nothing new or novel about the risk equation they propose:

Privacy Risk = Likelihood of a problematic data action X Impact of a problematic data action

If this looks familiar, it’s because it’s reminiscent of every other risk equation out there. It attempts to measure how often bad things happen and, when they do, how bad they are likely to be. This is the foundational element of all risk, and it doesn’t take much to show that it applies across a multitude of scenarios: car insurance, life insurance, information security, climate change, medical malpractice, and, yes, privacy as well. It’s not as if they stumbled across the one unique field of study for which there is no possible way prior work in risk could apply.
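To show just how generic this structure is, here is a minimal Monte Carlo sketch in Python. Everything in it (the function name, the frequency, the loss range) is an illustrative assumption of mine, not something from the NIST draft or any standard:

```python
import math
import random

def annualized_loss(frequency, loss_low, loss_high, trials=10_000, seed=1):
    """Monte Carlo sketch of 'likelihood x impact': simulate many years,
    draw a Poisson-distributed count of loss events per year, and sum
    their magnitudes. All inputs are illustrative placeholders."""
    rng = random.Random(seed)
    threshold = math.exp(-frequency)
    total = 0.0
    for _ in range(trials):
        # Knuth's method for a Poisson-distributed event count
        events, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            events += 1
        # Impact of each event drawn from a made-up uniform range
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(events))
    return total / trials

# The structure is identical whether the scenario is security, privacy,
# or car insurance; only the inputs change.
avg = annualized_loss(frequency=0.5, loss_low=10_000, loss_high=250_000)
```

Swap in privacy-flavored inputs (how often a problematic data action occurs, how costly it is when it does) and the math is untouched, which is rather the point.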

Most of their argument rests on how different privacy is from security; however, the concepts apply equally. If we decompose the likelihood side of the equation into how often attempts are made, it’s easy to see how “threatening” they can be. Note that this doesn’t have to be malicious, either; it applies just as easily to accidental scenarios. It’s certainly “threatening” to an organization if the envelope stuffer misfires and puts a financial statement into a mismatched envelope. Malicious actions designed to compromise the privacy of individuals through information systems are already covered by security risk standards, so the distinct characteristics of privacy scenarios are not apparent.

The term “vulnerability” has some disputed usage in security vs. risk, but it works for privacy either way. If you mean “vulnerability” in the sense of “control weakness” or “control deficiency” (such as a missing patch) you will find it works fine for privacy. A series of controls that keep the envelope stuffer from making a mistake could suffer just such a deficiency. But if you mean “vulnerability” in the FAIR sense of answering the question “How vulnerable are we to active attempts to compromise privacy?” then you will find that works as well.
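As a hedged sketch of that FAIR-style reading (the scenario numbers below are invented for illustration), decomposing loss event frequency into attempt frequency times vulnerability works identically for the envelope-stuffer example:

```python
def loss_event_frequency(threat_event_frequency, vulnerability):
    """FAIR-style decomposition: loss events per year equals
    attempts per year times the probability an attempt becomes a
    loss event ('vulnerability' in the FAIR sense)."""
    return threat_event_frequency * vulnerability

# Illustrative (made-up) privacy scenario: the envelope stuffer runs
# 50,000 mailings a year, and the controls fail on 0.01% of them.
mismailings_per_year = loss_event_frequency(50_000, 0.0001)
```

Nothing about the accidental, non-malicious nature of the scenario breaks the decomposition; it only changes what counts as an “attempt.”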

I understand the desire to claim the mantle of creator and inventor; however, it’s sheer folly to ignore the history that has brought us here. There is a well-worn saying in academia about being able to see far because we stand on the shoulders of giants. To me that’s a statement of humility, a reminder to give credit where it’s due.

Speaking at OpRisk North America 2015

It’s a busy week for me. In addition to the webinar this Friday, next Monday (23 March) I’ll be holding a workshop at 11:00 AM in the Data Quality track of the OpRisk North America conference. I’ll be talking about financial metrics, risk appetite, volatility trends, and scenario analysis. You can’t have quality data without quantification, so that will be a big part of my presentation.