Jack Jones created something extraordinary in FAIR, and it will continue to do nothing less than revolutionize our industry. That he chose to share even a little bit of that with me by coauthoring the FAIR book is incredibly humbling. It's a gift I will treasure for the rest of my life.
Any success I have had in building risk programs is due entirely to his teaching and mentoring early in my career, and I am so incredibly grateful.
One of the best things about the FAIR Institute is its culture of giving back, and during my acceptance I offered to help anyone through their journey to risk quantification. I'll make that offer again here: if you need support, tips, or just a sympathetic ear while building your risk program, please do reach out. I'd be happy to help :-)
I'm looking forward to participating in this panel discussion at the upcoming FAIR Conference. This is a topic that really speaks to me, and I'm looking forward to sharing what I've experienced and hearing how my co-panelists have accomplished the same.
I really enjoy reading Duncan Watts's work, and I was blown away by how he assailed the concept of common sense that we all rely upon so readily:
What we don’t realize, however, is that common sense often works just like mythology. By providing ready explanations for whatever particular circumstances the world throws at us, common sense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe.
Questioning our perception of reality is pretty heavy, and you can spend a lot of time working through it. In my article, I use this idea to break free of the crutch of using common sense to manage risk.
You can read the full article on the @ISACA Newsletter site here.
I was inspired to write this article by a speed limit change on a local Interstate. It was a good jumping-off point to illustrate the parallels between speed limits and risk appetite, and what it takes to change each.
You can read the article on the FAIR Institute website here.
I had a great time writing this post for the FAIR Institute. I was inspired by an article in The Conversation by David Levari, a post-doc at Harvard Business School, called Why Your Brain Never Runs out of Problems to Find. In it, he describes how our brains maintain a sliding scale of "badness" over time, so that something will always occupy the "bad" spot even when it's not that big of a deal. In my write-up, I apply that idea to cybersecurity and include some pointers for FAIR practitioners.
My latest @ISACA column was posted recently. This time I tackled a hard issue in the human factors space: awareness training. Specifically, I explored the notion that having a good security team may actually impede the effectiveness of a security awareness program, drawing on some concepts from the bystander effect.
I wrote a piece for RiskLens* recently about how to use FAIR to build and justify an information security budget and strategic initiatives. It's an interesting problem space, as it requires finding the appropriate level of abstraction (program level versus technology level), but it's also a very solvable problem to add risk reduction justification to these annual budgetary exercises.
Fun story: when I did this exercise years ago, I actually rated one initiative as *increasing* risk. It started an interesting discussion, but the lesson is that not everything will result in less risk to your organization. Budgeting is a complicated amalgam of math, politics, and priorities; be sure to bolster your budgeting process with some risk arguments.
I was very fortunate to have the opportunity to share my thoughts on KRIs last week on The FAIR Institute's website. I used the metaphor of sentinel species (think canaries in coal mines) to describe indicators of risk, as opposed to risk itself. That is an important distinction I strongly feel we aren't making in our identification and use of KRIs.