What is the current status of our privacy online, and how did we arrive at a situation of ubiquitous tracking and surveillance? Helen Nissenbaum of NYU and Matt Jones of Columbia discussed these questions at a public event at Drexel University. Teasel Muir-Harmony, Dissertation Writing Fellow at the Center, provides a brief summary of the event.
How many people have read a website’s privacy policy from beginning to end? When Helen Nissenbaum, Professor of Media, Culture and Communication at New York University, asked the audience at the PACHS lecture on “Technology, Privacy and Security,” very few people raised their hands. The audience’s response reflected fairly standard Internet user behavior. These policy statements can seem endless and impenetrable, so most people either ignore them or skim them and hope for the best. Should we be spending more time deciphering the legalese of these lengthy statements? During the question and answer period, Matthew Jones, Professor of Contemporary Civilization at Columbia University, added that even when we do read privacy policies, many of us do not fully comprehend the personal risk or long-term implications of clicking “agree.”
Why do we allow online surveillance and analysis? Do we feel differently about government and commercial bulk data collection? Is it okay if Facebook.com keeps track of us but wrong if the National Security Agency (NSA) watches our online activity? Or is it the other way around? Can we strike a proper balance between data mining and privacy, between national security and transparency? On March 25, Jones and Nissenbaum discussed these timely issues and offered insight into the history and ethics of mass surveillance and data analysis.
If we examine the history of data mining, Jones explained, we will be in a better position to understand the current state of global surveillance conducted in the name of national security. Jones argued that it is important to get away from viewing mass hacking, active cyber defense, “owning the net,” and passive listening as parts of an inevitable historical process. People made choices; technology did not drive the NSA’s global surveillance program. In 2001, the NSA created a vision for itself that placed the institution within a grand historical narrative and framed the expansion of the breadth and depth of its telecommunications metadata collection as a necessary national security measure. By tracing that vision’s genesis in the 1990s, Jones showed how this rationalization for massive data mining became thinkable.
Nissenbaum shifted the conversation to the commercial sector and the difficulty of safeguarding online privacy. Increasing transparency and choice has been the predominant solution for protecting privacy, but this tactic falls short because it places a significant burden on users. Nissenbaum offered a different approach: “contextual integrity.” Based on the premise that social norms and values govern the various spheres or contexts of our lives, “contextual integrity” suggests that the privacy policies of virtual social spheres should reflect our experience and expectations of the non-virtual world. In terms of online privacy, “contextual integrity” would require that websites like Amazon.com collect, aggregate and distribute data about our buying habits in the same way that a brick-and-mortar store would use this information. If websites tailored their privacy policies to fit the norms and values of their respective social spheres, we would not be encumbered with reading and understanding dozens of privacy policies.
This discussion left me wondering how our answers to questions about privacy will change over time. As countless social media platforms condition us to be less private, to share pictures of our pets and 140-character descriptions of our lunch, are our notions of privacy evolving? Will we value privacy differently in the future than we value it now? If so, how will our interest in safeguarding privacy change? Will we increasingly accept surveillance and data mining as a necessary part of life?