I had a great time at Risk.Net’s Cyber Risk NA conference this week. I moderated a panel on Modeling Cyber Risk with Jack Jones (EVP, RiskLens), Ashish Dev (Principal Economist, Federal Reserve), Manan Rawal (Head of US Model Risk Management, HSBC USA), and Sidhartha Dash (Research Director, Chartis Research).
We only had 45 minutes and ran out of time before we could get to every topic on my list, so I wanted to include some notes here on what we covered:
- I opened with a scenario: I asked the panelists, if they were presenting to the board, which would be the more honest set of top risks to disclose: 1) IoT, GDPR, and Spectre/Meltdown, or 2) our top risk is that we aren’t modeling cyber risk well enough. Nearly everyone chose option 2 :-)
- We talked about whether there was a right way to model
- Poisson, Negative Binomial, Log Normal
- Frequentist vs Bayesian
- Which model for scenarios makes more sense: Basel II categories or the CIA triad?
- Level of abstraction required for modeling
- Event funnel: Event of interest vs incident vs loss event
- Top-down vs. bottom-up
- What key variables are necessary to model cyber risk? (Everyone agreed that some measure of loss frequency and of impact/magnitude is necessary.)
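The frequency/severity approach behind several of these topics (e.g., Poisson frequency with lognormal severity, as used in LDA-style models) can be illustrated with a small Monte Carlo simulation. This is a minimal sketch: all parameter values below are made up for illustration and are not calibrated to any real loss data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters -- NOT calibrated to real data
annual_event_rate = 2.0                   # mean loss events per year (Poisson lambda)
severity_mu, severity_sigma = 11.0, 1.5   # lognormal parameters (log-dollar scale)
n_years = 100_000                         # number of simulated years

# Frequency: how many loss events occur in each simulated year
event_counts = rng.poisson(annual_event_rate, size=n_years)

# Severity: draw a lognormal loss for each event and sum per year
annual_losses = np.array([
    rng.lognormal(severity_mu, severity_sigma, size=n).sum()
    for n in event_counts
])

# Summarize the aggregate annual loss distribution
print(f"Mean annual loss: ${annual_losses.mean():,.0f}")
print(f"95th percentile:  ${np.percentile(annual_losses, 95):,.0f}")
print(f"99th percentile:  ${np.percentile(annual_losses, 99):,.0f}")
```

Swapping the Poisson for a negative binomial (to capture overdispersed event counts) or replacing the point parameters with priors (the Bayesian route) are exactly the modeling choices the panel debated.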
Things we wanted to get to but ran out of time:
- What is necessary to get a model approved and validated by Model Risk Management?
- Should you purchase an external model or build your own?
- Can we use our cyber models for stress testing/CTE calculations?
- Do we combine cyber scenarios with other operational risk scenarios?
- One audience question that we ran out of time for was: “How is the FAIR approach different from LDA and AMA, and how does it address their weaknesses (frequency and severity correlation)?”
- This was a good question, but to be fair, FAIR wasn’t designed to be a stress-testing model. However, many of the inputs used for FAIR are also used for LDA and AMA.
- There were lots of other audience questions about the use of FAIR, which is always encouraging!
I’m very pleased to announce that my proposal was accepted for this year’s RSA Conference! I’ll be giving an overview of the quantitative risk framework I’ve implemented at my firm, TIAA.
I’ll be speaking Wednesday morning (April 18th) in the Security Strategy Track as an Advanced Topic.
Here is the abstract:
This session will review the Cyber Risk Framework implemented by TIAA, which scales from the granular level up to business-level aggregate risk reporting, avoiding the typical pitfalls of being either too narrow or too broad. The session includes discussions of policy, standards, configuration baselines, quantification, ORM/ERM risk reporting, and project lifecycle engagement.
FAIR plays a big part in our framework, so come with your questions about how to implement FAIR in your organization.
I’ve got a busy October speaking calendar this year!
I will be participating on a panel discussion at the inaugural FAIR Conference this year, as well as signing books with Jack Jones.
Should be a fun time! Be sure and stop by to say hello!
The final post of the interview/blog series I did with the FAIR Institute was posted last night.
Part 2 of the interview/blog series I did with the FAIR Institute was posted this morning.
The folks over at the FAIR Institute were nice enough to interview me recently and turn it into a series of blog posts. Part 1 is up right now and sets the stage for how to assess quality in your Cyber Risk assessments.
Earlier this year my good friend Jack Jones and I entered into a contract with Elsevier imprint Butterworth-Heinemann to write a book on the risk assessment methodology FAIR. We will deliver the final manuscript in the first quarter of 2014, and it should be in print next summer/fall. The book is tentatively titled Measuring and Managing Information Risk: A FAIR Approach.
It is a real honor to be able to write about a topic I love with the industry visionary that taught me how to do it.
From the beginning, when Jack and I first began talking about this book (over dinner in the early summer of 2012), we wanted to write a conversational book that teaches risk practitioners how to do FAIR. We didn’t want to write a risk textbook, and to be sure, this is not a math book. It is very much intended to be an accessible book that helps people take the work they are currently doing in risk management and quickly improve the results using applied methods and techniques. And don’t worry: our trademark senses of humor will be firmly intact throughout (my tongue always seems to be firmly ensconced in my cheek).
This book has been a long time coming. FAIR has evolved significantly since Jack Jones first published the FAIR whitepaper in 2005. Jack and I have conducted numerous FAIR training sessions and classes that detail the evolution of this now industry-standard approach, but one thing is still a challenge for many people: how to apply FAIR to the daily security scenarios they face. This book will describe a variety of scenarios to help lift the fog and give people “Ah-ha” moments, as they will quickly find examples that mirror situations they are facing, or techniques they can use to better model risk with FAIR. We even take it further by showing you how to present risk scenarios to management and how to integrate FAIR into many popular risk assessment standards (NIST, ISO, etc.).
When you are done reading this book, you will know how to apply FAIR anywhere to model the risk associated with virtually anything. And it will also be a great reference for those looking to earn the Open Group’s upcoming FAIR Risk Analyst certification.
Naturally, writing a book takes a lot of work, which is why my writing here has been sparse these past couple of months. In the meantime, you can get a preview of the book in a blog post Jack wrote on what risk management is and how to practice it.