RSAC 2019 Virtual Pen Testing Slides Available

RSA wrapped up over two weeks ago, and between that and an ensuing illness, I realized I hadn't yet posted about my presentation with Joel Amick. I thoroughly enjoyed sharing this work with the RSA audience and had some great conversations afterwards. I think agent-based modeling (ABM) has some interesting use cases in cybersecurity and risk management, and for organizations that have data sets about their assets covering control strengths, threats, and losses, there is a valid application of ABM to attacker forecasting.

The presentation slides have been posted here. The slides are static and don't show the video of the model; however, the presentation was recorded and the video has been posted on RSAC onDemand for those who attended. When it's opened up to the rest of the world, I will post that as well.


Presenting on Agent Based Risk Modeling at RSA Conference Next Week

RSA Conference is next week and I'm excited to share that I will be presenting on some work a colleague and I have done on building an Agent-Based Model (ABM) using FAIR risk data.

This should be an interesting discussion, so please join me next Wednesday at 2:50PM Pacific in Moscone West 2011.

I also served on the program committee this year for the GRC track, and I can report that this year's risk and metrics presentations will be insanely good! You are all in for a treat. If you will be in SF next week for the conference, be sure to look me up.


Jack and Jack talk Risk Modeling at Cyber Risk NA

I had a great time at Risk.Net's Cyber Risk NA conference this week. I moderated a panel on Modeling Cyber Risk with Jack Jones (EVP, RiskLens), Ashish Dev (Principal Economist, Federal Reserve), Manan Rawal (Head of US Model Risk Mgmt, HSBC USA), and Sidhartha Dash (Research Director, Chartis Research).

We only had 45 minutes and ran out of time before we could get through all the topics I had on my list, so I wanted to include some notes here on the things we covered:

  • I opened with a scenario: I asked the panelists, if they were presenting to the board, which would be the more honest set of top risks to disclose: 1) IoT, GDPR, and Spectre/Meltdown, or 2) "Our top risk is that we aren't modeling cyber risk well enough." Nearly everyone chose option 2 :-)
  • We talked about whether there was a right way to model
    • Poisson, Negative Binomial, Log Normal
    • Frequentist vs Bayesian
  • Which framing for scenarios makes more sense: Basel II categories or the CIA triad?
  • Level of abstraction required for modeling
    • Event funnel: Event of interest vs incident vs loss event
    • Top Down vs. Bottom Up
  • What are the key variables necessary to model cyber risk? (Everyone agreed that some measure of loss frequency and impact/magnitude is necessary.)
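
To make the distribution discussion above concrete, here is a minimal Monte Carlo sketch of the frequency/severity structure the panel was debating: Poisson-distributed event counts compounded with lognormal severities. All parameters (roughly two loss events per year, a $50k median severity) are invented for illustration and are not figures from the panel.

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's method: draw an event count from a Poisson(lam) distribution."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_losses(freq_lambda, sev_mu, sev_sigma,
                           trials=10_000, seed=1):
    """Annualized loss distribution: Poisson frequency, lognormal severity."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        n_events = poisson_sample(rng, freq_lambda)   # loss events this year
        year_total = sum(rng.lognormvariate(sev_mu, sev_sigma)
                         for _ in range(n_events))
        totals.append(year_total)
    return totals

# Illustrative parameters only: ~2 loss events/year, median severity $50k
losses = simulate_annual_losses(freq_lambda=2.0,
                                sev_mu=math.log(50_000), sev_sigma=1.0)
mean_loss = statistics.mean(losses)
p95_loss = sorted(losses)[int(0.95 * len(losses))]
```

Swapping the Poisson for a negative binomial (to allow over-dispersed frequencies) or changing the severity distribution is exactly the kind of modeling choice the panel discussed; the compound structure stays the same.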

Things we wanted to get to but ran out of time:

  • What is necessary to get modeling approved and validated by Model Risk Management
  • Should you purchase an external model or build your own?
  • Can we use our cyber models for stress testing / CTE calculations?
  • Do we combine cyber scenarios with other operational risk scenarios?
  • One audience question that we ran out of time for was, "How is the FAIR approach different from LDA and AMA, and how does it address their weaknesses (frequency and severity correlation)?"
    • This was a good question, but to be fair, FAIR wasn't designed to be a stress testing model. However, many of the inputs used for FAIR are also used for LDA and AMA.
  • There were lots of other audience questions about the use of FAIR which is always encouraging!

Cyber Risk Cassandras

I wrote this latest bit for the @ISACA column after reading Richard Clarke's book and trying to rationalize how it applies to cyber risk. It's easy to predict failures and impending doom at a macro level; it's much harder to do at the micro level, which is infinitely more interesting and useful.

You can read more here.

Cyber Deterrence

I was reading up on cyber deterrence today and ran across this little gem in relation to nuclear deterrence:

Because of the value that comes from the ambiguity of what the US may do to an adversary if the acts we seek to deter are carried out, it hurts to portray ourselves as too fully rational and cool-headed. The fact that some elements may appear to be potentially “out of control” can be beneficial to creating and reinforcing fears and doubts within the minds of an adversary’s decision makers. This essential sense of fear is the working force of deterrence. That the US may become irrational and vindictive if its vital interests are attacked should be a part of the national persona we project to all adversaries.

–Essentials of Post Cold War Deterrence (1995)



Using Economics to Diagnose Security Model Failure

Many information security practitioners labor daily to increase security for the organizations in which they work. The task itself seems beset with obstacles. On the one hand, there is the need to acquire security funding from executives who are distracted from security by the Sturm und Drang of daily business operations, tempered by the need to embed long-term strategy in the hearts and minds of employees. On the other hand is the near-daily obliviousness of the employees they are charged with protecting. They deal with too many clicks on too many phishing emails, accidentally unencrypted emails with government identification numbers attached, and the ever-present push to increase security awareness amongst a group that, at best, recognizes that security has something to do with firewalls and, at worst, sees security as getting in the way of generating the revenue the business is tasked with acquiring. While such a scenario may seem hopeless, it is perhaps better viewed through the lens of economics. Information security economics drive behaviors, decisions, and attitudes concerning the state of security in an organization.

In the dynamic of internal political battles over security funding and operations, it's easy to overlook the other forces at play. Through the lens of economics, we can reveal additional levers that contribute to decision-making. One of these is the pervasiveness of asymmetric information. For the average consumer, making decisions that increase security is often very difficult, as they lack two things that would assist them. The first is the domain knowledge necessary to understand what good security looks like. The dynamic between the evolution of controls and the nature and skillsets of attackers appears to shift daily; understanding these environmental elements well enough to make a fully informed decision requires nothing less than full-time devotion, which is clearly more than the average consumer can give. The second is the ability to directly observe the environments they are trying to measure. Because they aren't employed in the security function of the organizations offering them security, clandestine information about that security is necessarily withheld from them, information that is vital to reaching an accurate conclusion about an organization's security posture. Consumers are instead left with more readily available, yet misleading, indicators of security. From these secondary and tertiary, often latent, factors it is far more difficult to derive an accurate measure of security.

An example of this battle of indirectly observable economic factors plays out in the world of financial services and banking. The average consumer may be notified by a bank that their information was in scope of a recent security breach. Such breach notification letters connote action yet offer assurance that any damages the customer may incur will be handled by an insurance provider. What is the customer to do? Should they follow the advice of the letter, that is, take no action, merely monitor their accounts for fraud, and rest assured that the firm that just lost their data will handle things, or should they move their accounts to another provider? Each customer has their own calculus for these decisions. Some will accept the premise of the letter with an uneasy feeling; others will stand on moral outrage and move their financial accounts elsewhere. Neither decision is without its drawbacks. In the former, the customer must judge that while security failed once, it likely won't fail again, and that if it does, the coverage offered will offset any damages incurred. Note that in this option, the customer is forced to assess risk (frequency of loss as well as its impact). The latter scenario presents a different calculation. The customer must judge that whatever damages they have yet to incur are greater than the costs of switching accounts, which are not negligible: one must account for the time spent locating new providers and financial advisors, modifying automatic drafts and direct deposits, opening new accounts, and signing paperwork. This time is not trivial, and it says nothing of the most important factor in making the switch: is the firm you are moving to more secure than the one you left? In truth, the average consumer cannot know. They may choose a company that was not recently in the news for such problems (relying on secondarily observable, yet still latent, measures of security), but that does not mean problems haven't existed in the past or won't arise again. Indeed, the security of the new firm is just as opaque as that of the one the consumer just left. While switching may satiate their moral outrage, in truth it does nothing to increase the security of their accounts.
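
The customer's stay-or-switch calculus above can be made concrete as a back-of-the-envelope expected-value comparison. Every number below is a made-up assumption for illustration, not data:

```python
# All figures are hypothetical, chosen only to illustrate the trade-off.
p_breach_stay = 0.05     # assumed annual breach probability if the customer stays
p_breach_switch = 0.04   # assumed probability at the new (equally opaque) provider
expected_damage = 2_000  # assumed out-of-pocket loss per breach, after insurance
switching_cost = 300     # assumed value of the time spent moving accounts

# Expected annual benefit of switching vs. the one-time cost of doing so
expected_benefit = (p_breach_stay - p_breach_switch) * expected_damage
should_switch = expected_benefit > switching_cost
```

Under these hypothetical numbers, the expected benefit of switching is dwarfed by the switching cost, which is one economic reading of why post-breach defections are rarer than predicted; and since the new provider's security is unobservable, the customer cannot even be confident the breach probability goes down at all.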

This is but a brief analysis of the role economics plays in describing the behaviors, decisions, and attitudes of consumers and their security choices. However, it does help explain the actions of large groups of people. For instance, it shows why most consumers won't move their business to another financial provider following a breach (repeat offenders, especially those with failures in quick succession, excluded). One might call this behavior irrational, and indeed, many in the security community do just that, predicting wave after wave of defecting customers in a catastrophically spiraling disaster of attrition. Instead, what we observe is in direct opposition to what was predicted. When such a conflict exists between reality and a model, reality wins. Economic principles, applied to information security, can help explain why one model has failed and why another might be more correct.