Today I was notified that I was named as an IAPP Fellow of Information Privacy. I’m honored and humbled to be a part of this organization’s inaugural class of Fellows.
You can read more about this designation here.
The list of IAPP Fellows is here.
In 1935, Austrian physicist Erwin Schrödinger devised the thought experiment known as Schrödinger’s Cat. It’s a gruesome but hypothetical experiment in which we place a cat in a cage (sometimes a box) with a device that may randomly release a poison capable of killing the cat. However, the device may also never release the poison, and the cat would remain alive. There are many variations on this, such as one where opening the box releases the poison, rendering the cat dead. One of the implications is that the cat could exist in two states at the same time: both alive and dead. We’d never know for sure unless we opened the box, but then we’d be complicit in the cat’s death (I suppose this could be done with any pet, but Erwin must have hated cats).
Over the years, there have been many versions and extensions of this thought experiment. One is that when people are aware they are being observed, they behave differently. In what way did their behavior change? We can never know, as the observation itself changed the outcome (the observer effect, often conflated with Heisenberg’s Uncertainty Principle). In security, we rely on this behavioral effect as a preventative control. It’s the reason home security signs exist: somewhat paradoxically, letting would-be attackers know the level of security in your home serves to deter them (by revealing our control strength, we make it possible for attackers to assess their own risk, including how good their skills are at overcoming our controls). This same concept is in play when we use login banners and periodically remind our users that their activity on company systems is monitored. We let them know that we are watching, and in so doing we change the outcome in the hopes that good people remain that way.
Which is exactly what we try to do with our children this time of year. Clearly, the “naughty or nice” list is subject to halo bias; children work harder at being on the nice list in December than in any other month (especially January). However, they are also more aware they are being monitored. We as parents reinforce this verbally, they hear it in carols, and they see it in effect in holiday television programming. Austrian and German cultures took this concept of monitoring children’s behavior to another extreme (although admittedly more stick than carrot) with the Krampus (Now a major motion picture!) and Belsnickel characters, who punished naughty children in a horribly violent and terrifying manner. Not to be outdone, Japanese New Year’s ceremonies feature the Namahage character, who wears a demonic mask and frightens lazy or bad children into obeying their parents. All of which is to say that we have a long historical understanding of the value monitoring plays in regulating behavior. Now I need to get back to work because the Elf on the Shelf is staring at me and I don’t want to end up on the naughty list this year…
I recently had the privilege of some discussions with fellow members of a privacy-oriented group. They were mostly lawyers, and after a series of discussions we waded into the current disapproval over Nordstrom’s practice of tracking people by WiFi (see here for more on this). Basically, it’s the implied consent that seems to be getting people up in arms. That, and the natural tendency to get riled up about technology-based tracking in general. I interjected that this really isn’t very different from tracking customers by camera and reviewing the tapes after the fact. Admittedly, the automated element makes this slightly different, but at its base it’s still the same to me. After all, are you consenting to be recorded as you walk through the store? No, it’s implied, and we’ve all mostly moved beyond our concerns about being recorded. But then I remembered something much more central to this debate! Allow me to paint a picture.
A very good friend of mine from college (and high school, actually) was an electrical engineering major. He had a job with a company that made lab rat cages. They sold to pharmaceutical companies, universities, you know, any place that needed something to put their white, red-eyed rats into. So why did they need an EE on staff? Well, his job was to design a monitoring solution for these cages. He configured a USB camera to record the rats, then wrote software that divided the camera’s field of vision into a grid. When the software detected movement in one of the grid cells, it incremented a counter and provided some reporting capabilities. Researchers would use this to determine how often the rats went to the water dish, spent time at the food bowl, hit the “gym” wheel, etc.
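The core of that monitoring trick is simple enough to sketch. This is my own minimal illustration, not my friend’s actual software: frames are fake grayscale grids, and the function names, grid size, and threshold are all made-up assumptions.

```python
# Hypothetical sketch of the grid-based activity counter described above.
# Frames are grayscale images represented as lists of lists of ints;
# the grid dimensions and change threshold are illustrative, not real values.

def count_grid_activity(prev_frame, curr_frame, grid_rows, grid_cols, threshold):
    """Divide the field of view into grid cells and return a dict mapping
    (row, col) to 1 for each cell whose total pixel change exceeds threshold."""
    height, width = len(curr_frame), len(curr_frame[0])
    cell_h = height // grid_rows
    cell_w = width // grid_cols
    counters = {}
    for gr in range(grid_rows):
        for gc in range(grid_cols):
            # Sum absolute pixel differences within this cell between frames.
            diff = 0
            for y in range(gr * cell_h, (gr + 1) * cell_h):
                for x in range(gc * cell_w, (gc + 1) * cell_w):
                    diff += abs(curr_frame[y][x] - prev_frame[y][x])
            if diff > threshold:
                counters[(gr, gc)] = counters.get((gr, gc), 0) + 1
    return counters

# Example: a 4x4 frame split into a 2x2 grid; "movement" only in the top-left cell.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[0][0] = 255  # a rat moved in the top-left quadrant
print(count_grid_activity(prev, curr, 2, 2, threshold=100))  # -> {(0, 0): 1}
```

Run this per pair of consecutive frames and accumulate the per-cell counters over time, and you have the “how often did the rat visit the water dish” report.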
There is absolutely nothing stopping an existing retailer from applying this technological approach (which is approaching two decades old now) using nothing more than the surveillance videos already in place. I’m willing to wager this is the current state for a lot of retailers.
So really, let’s put our big boy and girl pants on and don our risk hats. Look at this holistically: if I configure a wireless access point to log association requests by MAC address and then correlate those logs across various devices, it’s really no different than tracking you like a rat in their cage. I mean store.
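For the skeptical, the correlation step is trivial. Here’s a minimal sketch under assumed data: each access point logs (timestamp, MAC) sightings, and merging those logs yields a per-device movement trail. The AP names, MAC address, and timestamps are all invented for illustration.

```python
# Hypothetical sketch of MAC-address log correlation across access points.
# The log format, AP names, and MAC are assumptions, not any real retailer's data.
from collections import defaultdict

ap_logs = {
    "ap-entrance": [(1, "aa:bb:cc:dd:ee:ff"), (2, "11:22:33:44:55:66")],
    "ap-shoes":    [(5, "aa:bb:cc:dd:ee:ff")],
    "ap-checkout": [(9, "aa:bb:cc:dd:ee:ff"), (9, "11:22:33:44:55:66")],
}

def build_trails(logs):
    """Return {mac: [(timestamp, ap), ...]} sorted by time: a movement trail."""
    trails = defaultdict(list)
    for ap, sightings in logs.items():
        for ts, mac in sightings:
            trails[mac].append((ts, ap))
    for mac in trails:
        trails[mac].sort()  # chronological order across all APs
    return dict(trails)

trails = build_trails(ap_logs)
print(trails["aa:bb:cc:dd:ee:ff"])
# -> [(1, 'ap-entrance'), (5, 'ap-shoes'), (9, 'ap-checkout')]
```

A few lines of log merging and you have the customer’s path through the store, which is exactly the rat-in-a-grid picture from before.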
I think a lot of the privacy industry is invested in outrage; that is, greeting every new technological advance and permutation of common practice as an outright infringement of natural law and civil rights. As always, it falls upon the risk profession to act as the saucer that cools the hot coffee of others into a productive risk discussion.