25 Aug 09

Privacy: A CC-like Approach, and Why It's Important to Free Network Services

fine print fail

One Friday this summer, the other Creative Commons interns and I went to Stanford Law School for a talk and discussion led by Ryan Calo of the Center for Internet and Society. We joined interns working at Free Culture-type orgs around the Bay Area and had a really interesting discussion about privacy policies and notice.

The Issues:

  • Privacy policies are unclear. They are written in legalese that laypersons can’t understand.
  • Privacy policies are unreasonable. Because people don’t read them and because “users” have no alternatives, companies are free to retain the right to do whatever they want with your information.
  • Privacy policies are non-negotiable. You can either accept the policy or refuse to use the product/service. There is hardly ever an alternative product/service with a more liberal privacy policy.

In response to these issues, a few things have been proposed. There were two parallel but different approaches that each involved what I think of as a Creative Commons-like model (it’s worth noting as well that Ryan had another fascinating idea involving incorporating human avatars into interfaces, about which he has a blog post). Essentially, the two ideas, as I recall them, broke down thus:

  1. The user could brand her content with the privacy options that she wants, with some sort of badge.
  2. The service could brand its privacy policy with some sort of human-readable badge or notice.

The former seems more difficult to successfully implement—you would need all participating services to comply. I’m more interested in the latter proposal, mostly because it seems so elegant in its simplicity. I’m imagining shamelessly copying some aspects of Creative Commons licenses:

  • Three-tiered views: lawyer-readable legalese, human-readable plain English (in simple, bullet-pointed terms), and machine-readable metadata (RDFa or something).
  • Standardization: all privacy policies generated from a set of more-or-less on/off switches, like CC’s commercial/noncommercial, remix/no-derivs, copyleft/noncopyleft.

My idea is that the service provider would go to mycoolprivacypolicy.com and use a simple interface like the CC license chooser to piece together their privacy policy. They would be given the legal code as well as the machine-readable and human-readable versions.
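To make that concrete, here is a rough Python sketch of what such a chooser might produce under the hood. Everything in it is made up for illustration: the switch names, the summary wording, and the JSON output standing in for real RDFa metadata.

```python
import json

# Hypothetical on/off switches a service would answer in the chooser UI.
# The names are illustrative, not a proposed standard.
POLICY_SWITCHES = {
    "shares_data_with_third_parties": False,
    "uses_data_for_promotion": True,
    "retains_data_indefinitely": False,
    "policy_can_change_without_notice": True,
}

# Plain-English lines keyed to each switch, for the human-readable view.
# First string is shown when the switch is on, second when it is off.
SUMMARY_LINES = {
    "shares_data_with_third_parties": (
        "We share your data with third parties.",
        "We never share your data with third parties.",
    ),
    "uses_data_for_promotion": (
        "We may use your information for promotional purposes.",
        "We do not use your information for promotional purposes.",
    ),
    "retains_data_indefinitely": (
        "We keep your data indefinitely.",
        "We delete your data when you leave.",
    ),
    "policy_can_change_without_notice": (
        "This policy can change at any time without notice.",
        "We will notify you before this policy changes.",
    ),
}

def machine_readable(switches):
    """The machine-readable tier: plain JSON here; RDFa embedding would carry the same flags."""
    return json.dumps(switches, indent=2)

def human_readable(switches):
    """The human-readable tier: short bullet points a layperson can scan."""
    return "\n".join(
        "* " + (SUMMARY_LINES[name][0] if on else SUMMARY_LINES[name][1])
        for name, on in switches.items()
    )

if __name__ == "__main__":
    print(machine_readable(POLICY_SWITCHES))
    print(human_readable(POLICY_SWITCHES))
```

The legal-code tier would presumably be generated from the same switches, the way CC builds its license deeds and legal text from a handful of choices.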

The readability issue with privacy policies is solved by the human-readable code. The unreasonability and non-negotiability of these privacy policies are also helped, but less directly.

With the two characteristics I outlined above, you could imagine browser plugins that let users engage with the privacy implications of their browsing. For example, you could tell your browser to notify you whenever you were on a website that reserved the right to use your information for promotional purposes. You could have it remember when a privacy policy stipulates that it can change at any time, and alert you when a change occurs. Basically, this adds up to a system where people take privacy policies seriously again, where they are actually read and thought about. When people are paying attention to privacy, services will compete over it, and users will win. In other words, more reasonable privacy policies will crop up because services will want to be the first to Truly Respect Your Privacy™, which will help with the unreasonability issue as well as the negotiability issue (policies won’t actually be negotiable, but users will have choices).
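Here is an equally rough sketch of that consuming side. Assuming the made-up policy flags from the sketch above, a plugin could compare a site's flags against the things a user has asked to be warned about; nothing here is a real extension API.

```python
# Hypothetical browser-plugin logic: the flag names match the illustrative
# chooser sketch above, and the watchlist is whatever the user opted into.

USER_WATCHLIST = {
    "uses_data_for_promotion": "This site reserves the right to use your info for promotion.",
    "policy_can_change_without_notice": "This site's privacy policy can change at any time.",
}

def warnings_for(site_policy_flags):
    """Return the warning messages triggered by a site's machine-readable flags."""
    return [
        message
        for flag, message in USER_WATCHLIST.items()
        if site_policy_flags.get(flag, False)
    ]

# Example: a site whose policy sets the promotional-use flag.
example_flags = {"uses_data_for_promotion": True, "policy_can_change_without_notice": False}
for warning in warnings_for(example_flags):
    print("Heads up:", warning)
```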

Perhaps the first step in implementing such a system is figuring out the standard for these privacy policies. In other words, what are the yes/no questions that need to be answered in order to build a full privacy policy? Perhaps services require the ability to have different answers for different pieces of data? I might write here again soon with a first stab at such a list.
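Just to illustrate that last question, the answers might need to attach to data categories rather than to the site as a whole. Something like this entirely hypothetical shape (the category and question names are invented):

```python
# Hypothetical: the same yes/no questions answered separately for
# different categories of data a service holds.
policy_by_data_type = {
    "email_address": {
        "shared_with_third_parties": False,
        "used_for_promotion": True,
    },
    "browsing_history": {
        "shared_with_third_parties": True,
        "used_for_promotion": True,
    },
}

# A plugin (or a reader) could then ask about one category at a time.
print(policy_by_data_type["browsing_history"]["shared_with_third_parties"])
```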

P3P is a (now defunct?) project that I really ought to research further, but it basically seems to be exactly what I’m discussing here. It might include the necessary standards that I just mentioned.
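From what I can tell, P3P already defined a machine-readable tier of its own: sites publish a policy reference file at the well-known path /w3c/p3p.xml and can send a compact policy in a P3P response header. Here is a quick sketch, using the Python requests library, of checking a site for either:

```python
import requests

def check_p3p(site):
    """Look for the two ways a P3P deployment typically announces itself:
    the well-known policy reference file and the P3P compact-policy header."""
    ref = requests.get(f"https://{site}/w3c/p3p.xml", timeout=10)
    has_reference_file = ref.status_code == 200

    front = requests.get(f"https://{site}/", timeout=10)
    compact_policy = front.headers.get("P3P")  # e.g. 'CP="..."' when present

    return has_reference_file, compact_policy

# Example (any site name works; most return nothing these days):
found, cp = check_p3p("example.com")
print("policy reference file:", found)
print("compact policy header:", cp)
```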

If P3P is now defunct, why did it fail? As I recall from our conversation that Friday, the answer was “nobody implemented it.” I’d like to close with this thought: perhaps we are at a unique moment where P3P or something similar is about to have many great opportunities to be adopted, if the right people talk about it soon. Let me explain.

During my last week in San Francisco, I saw Evan Prodromou of identi.ca and autonomo.us, as well as my boss Nathan Yergler and Google’s Chris DiBona, speak at CC Salon SF. Evan talked specifically about Free Network Services, and one thing he said that really struck me with its blunt simplicity was that we need to basically clone all networking websites … Twitter, Facebook, Dopplr, Digg, Last.fm … everything. Before you accuse Evan of trivializing the development of Free Software, I should note that he also said that we could make this process fun and improve on these services in ways beyond simply making them free. Indeed, the project is already under way, with sites like identi.ca and libre.fm picking up steam, and mumblings about many others floating around.

Perhaps privacy is relevant enough to computing freedom that it ought to be included in any sort of definition of a Free Network Service. Perhaps not. Either way, there is certainly a great deal of overlap. Libre.fm even devotes (at the time of writing) almost half of its home page to a statement about its liberal privacy policy.

My point is that if we’re going to be rebuilding the social web right now—and we are—then we ought to make sure that it ships with a “solution” to privacy. We need to make discussions about a P3P-like system part of our discussions about Free Network Services.