Always Mistrust New Risk Equations

There’s a cynical meme out there about mistrusting new (as well as proprietary) encryption methods: unless a method has been around long enough to suffer the slings and arrows of academic and practitioner criticism, it’s probably not worth entrusting your security to it.

I’m hereby extending this with a new corollary:

All claims of “new” equations for calculating risk are to be publicly vetted before you entrust your risk management capabilities to them.

To wit: the NIST IR 8062 draft, Privacy Risk Management for Federal Information Systems (published 28 May 2015), documents what it describes as a “new equation [that] can calculate the privacy risk of a data action…” Ostensibly this was required because “a privacy risk model that can help organizations identify privacy risk as distinct from security risk requires terminology more suited to the nature of the risk.” The authors then go on to describe the inadequacy of “vulnerability” and “threat” and how those terms cannot possibly relate to privacy concerns.

In truth, there is nothing new or novel about the risk equation they propose:

Privacy Risk = Likelihood of a problematic data action × Impact of a problematic data action

If this looks familiar, it’s because it’s reminiscent of every other risk equation out there. It attempts to measure how often bad things happen and, when they do, how bad they are likely to be. This is the foundational element of all risk, and it doesn’t take much to show that it applies across a multitude of scenarios: car insurance, life insurance, information security, climate change, medical malpractice, and, yes, privacy as well. It’s not as if they stumbled across the one unique field of study to which no prior work in risk could possibly apply.
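To make that concrete, here’s a minimal sketch of the same likelihood-times-impact calculation applied across a few of those domains. The scenario names and every number below are my own illustrative assumptions, not figures from the draft:

```python
# A minimal sketch of the generic risk equation: expected frequency of a bad
# event multiplied by how bad it is when it happens. All numbers are made up
# for illustration.

def risk(likelihood: float, impact: float) -> float:
    """Annualized risk = likelihood of the event (per year) x its cost."""
    return likelihood * impact

scenarios = {
    "car insurance (at-fault accident)": (0.05, 12_000),
    "security (stolen laptop)":          (0.10, 8_000),
    "privacy (misdirected statement)":   (0.02, 25_000),
}

for name, (likelihood, impact) in scenarios.items():
    print(f"{name}: ${risk(likelihood, impact):,.0f}/yr expected loss")
```

Swap in whatever labels you like; the arithmetic is indifferent to whether the bad thing is a fender bender or a privacy breach.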

Most of their argument rests upon how different privacy is from security; however, the concepts apply equally. If we decompose the likelihood side of the equation into how often attempts are made, it’s easy to see how “threatening” they can be. And note that this doesn’t have to be malicious, either; it’s easily applicable to accidental scenarios as well. It’s certainly “threatening” to an organization if the envelope stuffer misfires and puts a financial statement into a mismatched envelope. Malicious actions designed to compromise the privacy of individuals through information systems are already covered by security risk standards, so the supposedly distinct character of privacy scenarios is not apparent.

The term “vulnerability” has some disputed usage in security vs. risk, but it works for privacy either way. If you mean “vulnerability” in the sense of “control weakness” or “control deficiency” (such as a missing patch) you will find it works fine for privacy. A series of controls that keep the envelope stuffer from making a mistake could suffer just such a deficiency. But if you mean “vulnerability” in the FAIR sense of answering the question “How vulnerable are we to active attempts to compromise privacy?” then you will find that works as well.
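In FAIR terms, that decomposition looks something like the sketch below: how often a problematic event is attempted (or could happen accidentally), times the probability the controls fail to stop it. The envelope-stuffer figures are invented for illustration:

```python
# A rough FAIR-style decomposition of the likelihood side of the equation.
# "Vulnerability" here is the probability that an attempt (malicious or
# accidental) gets past the controls. All figures are invented.

def loss_event_frequency(attempt_frequency: float, vulnerability: float) -> float:
    """Events per year = attempts per year x probability an attempt succeeds."""
    return attempt_frequency * vulnerability

# Envelope stuffing: 50,000 statements mailed per year; address-matching
# controls fail on 0.01% of them.
mismatches = loss_event_frequency(50_000, 0.0001)
print(f"Expected mismatched envelopes per year: {mismatches:.1f}")  # 5.0
```

Nothing in that arithmetic cares whether the loss event is a security breach or a privacy harm.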

I understand the desire to claim the mantle of creator and inventor; however, it’s sheer folly to ignore the history that has brought us here. There is a well-worn saying in academia about being able to see far because we stand on the shoulders of giants. To me, that’s a statement of humility: a reminder to give credit where it’s due.

Despite all my rage…

I recently had the privilege of some discussions with fellow members of a privacy-oriented group. They were mostly lawyers, and after a series of discussions we waded into the current disapproval of Nordstrom’s practice of tracking people via Wi-Fi (see here for more on this). Basically, it’s the implied consent that seems to be getting people up in arms. That, and the natural tendency to get riled up about technology-based tracking in general. I interjected that this really isn’t very different from just tracking customers by camera and reviewing the tapes after the fact. Admittedly, the automated element here makes this slightly different, but at its base it’s still the same to me. After all, are you consenting to be recorded as you walk through the store? No, it’s implied, and we’ve all mostly moved beyond our concerns about being recorded. But then I remembered something much more central to this debate! Allow me to paint a picture.

A very good friend of mine from college (and high school, actually) was an electrical engineering major. He had a job with a company that made lab rat cages. They sold to pharmaceutical companies, universities, you know, any place that needed something to put their white, red-eyed rats into. So why did they need an EE on staff? Well, his job was to design a monitoring solution for these cages. He configured a USB camera to record the rats, then wrote some software that divided the camera’s field of vision into a grid. When the software detected movement in one of the grid cells, it incremented a counter and provided some reporting capabilities. Researchers would use this to determine how often the rats went to the water dish, spent time at the food bowl, hit the “gym” wheel, etc.
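Here’s a rough sketch of how I imagine that monitor worked, using OpenCV frame differencing. The grid size, thresholds, and camera index are my guesses, not his actual design:

```python
# A sketch of the rat-cage monitor as described above: split each frame into
# a grid, flag cells where consecutive frames differ, and tally hits per cell.
# Requires opencv-python and numpy; all tuning values are guesses.

import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 4, 4
DIFF_THRESHOLD = 25   # per-pixel intensity change that counts as motion
MOTION_PIXELS = 50    # changed pixels needed before a cell "fires"

def cell_hits(prev_gray, curr_gray):
    """Return a GRID_ROWS x GRID_COLS array marking cells with motion."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 1, cv2.THRESH_BINARY)
    h, w = mask.shape
    hits = np.zeros((GRID_ROWS, GRID_COLS), dtype=int)
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            cell = mask[r * h // GRID_ROWS:(r + 1) * h // GRID_ROWS,
                        c * w // GRID_COLS:(c + 1) * w // GRID_COLS]
            hits[r, c] = int(cell.sum() >= MOTION_PIXELS)
    return hits

cap = cv2.VideoCapture(0)            # the USB camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

counters = np.zeros((GRID_ROWS, GRID_COLS), dtype=int)
for _ in range(300):                 # sample ~10 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    counters += cell_hits(prev, curr)
    prev = curr
cap.release()

print(counters)                      # per-cell activity report: water dish,
                                     # food bowl, "gym" wheel, and so on
```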

There is absolutely nothing stopping an existing retailer from applying this technological approach (which is approaching two decades old now) using nothing more than the surveillance videos already in place. I’m willing to wager this is the current state of practice for a lot of retailers.

So really, let’s put our big boy and girl pants on and don our risk hats. Look at this holistically: if I configure a wireless access point to record association requests by MAC address and then correlate some logs between various devices, it’s really no different than them tracking you like a rat in their cage. I mean store.
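To underline how trivial the Wi-Fi half of that comparison is, here’s a sketch using scapy to count 802.11 probe requests (the frames a phone broadcasts while hunting for known networks) by source MAC. The interface name is a placeholder; it assumes a wireless card already in monitor mode and sufficient privileges:

```python
# A sketch of passive Wi-Fi presence tracking: count probe requests by the
# transmitting device's MAC address. "wlan0mon" is a placeholder interface
# name; requires scapy, a monitor-mode wireless card, and root privileges.

from collections import Counter

from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

sightings = Counter()

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        sightings[pkt.addr2] += 1   # addr2 is the sender's MAC address

sniff(iface="wlan0mon", prn=log_probe, timeout=60)  # listen for one minute
print(sightings.most_common(10))    # the ten most-seen devices in the "cage"
```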

I think a lot of the privacy industry is invested in outrage; that is, in greeting every new technological advance and permutation of common practice as an outright infringement of natural law and civil rights. As always, it falls upon the risk profession to act as the saucer that cools the hot coffee of others into a productive risk discussion.