One of the chief claims in Helen Nissenbaum’s book, Privacy in Context, is that there is a dimension of privacy that is not captured by analyses such as Warren and Brandeis’s: privacy in public. I can be out in public and yet the information about who I am and where I am going is effectively private. It’s privacy by anonymity. If you follow me or plant a tracking device on me, then this privacy is lost.
There is an analogy with the internet. When we use internet services, we are effectively leaving our private spaces and entering spaces controlled by companies. In that way, using the internet is like being out in public. Where things go wrong, she thinks, is that we assume we enjoy the privacy of anonymity there. But actually, we're being tracked and followed.
That is why people worry about the loss of privacy on the internet. It is also why the solutions proposed under the heading of transparency and consent fail to address the problem. We're being tracked but don't think of it that way. Various efforts to get us to consent to or appreciate what is going on all fail: we don't read the small print, we don't think it through, or we need the services badly enough that we sacrifice our privacy anyway.
Nissenbaum proposes that we follow norms of information gathering online that draw on analogies with real-life behavior: virtual stores should act like real stores. As Prof. Brown pointed out, her argument is that those norms have proven adequate to protect privacy. I think Rebecca was also right to suggest that this would make it easier for people to appreciate the ways in which they were compromising their privacy.
Both Rebecca and Ian were attracted to the thought that the norms aren’t fundamentally important. Privacy has a determinate value for us, they think, and we develop whatever norms are needed to protect that value.
Ian liked the idea of stripping identifying information from data. So the phone company could retain or maybe sell data about my movements, provided it removed any information that would link that data to me. (That's my example, not his.) Robert was skeptical that this would work. In any event, Ian certainly identified a goal, and privacy advocates would want to investigate whether it is achievable, I think.
Aidan liked a distinction between data collection and data aggregation. He was especially worried about the state doing the aggregating. And I was interested in the dog that did not bark: no one worried about insurance companies aggregating information. I have to say that in previous years, someone has always been worried about not being able to get health insurance after graduation. I wonder if Obamacare has already achieved its goal: perhaps you can't imagine living in a world where that is something to worry about. If so, hooray, I say.
Rebecca liked a distinction between machines having information about her and people having that information. She doesn't care if the phone company has computers that track her movements. But she cares a lot about a person looking over those records.
There’s more, but I didn’t get it all down. Great discussion!
It feels as though this smart-TV story proves Nissenbaum's point. TVs are being programmed to send information to their manufacturers. And it's not just the TV shows you watch: the TVs copy information about any files they find on other machines and send that too.
Why does that prove her point? Well, it has to be an invasion of privacy, and it certainly runs roughshod over the real expectations that people bring to owning an appliance. Who the heck expects their TV to spy on them?