Translucence is the property by which a material allows light, but not detailed images, to pass through it. In the same way that a translucent material allows you to see regions of light and shade, a translucent identity allows you to see patterns of identity but not details.
Jon Udell has been talking about this recently and came back to it in a follow-up post about the politics of data control. I'm interested because translucence is a fundamental principle in the PAOGA approach to identity.
For example, Jon, talking about how a company asked for his Social Security number to do a credit check, says:
At this point, of course, it becomes clear that Prosper shouldn't need to store my encrypted number in its database. It should only need to sign a request to the bureaus for a credit check. The request should then bounce to me, acquire my encrypted Social Security number along with permission for one-time use, and hop along to the bureaus. This protocol won't work synchronously, but it doesn't have to. If asynchronous message flow gives me the control I want, that'll be just fine.
It's time for a public conversation about the uses and limits of translucency. Is it really necessary to retain my Social Security number, or my search history, in order to provide a service? If not, what does it cost the provider of a service -- and cost the user, for that matter -- to achieve the benefit of translucency? Is this kind of opt-out a right that users of services should expect to enjoy for free, or is it a new kind of value-added service that providers can sell?
What Jon is describing is, more or less, the PAOGA architecture. We currently have a semi-translucent database (semi because of a resource-based constraint that will be addressed soon) where individuals store their data encrypted with their own unique key. A service provider no longer needs to store this data but can request it from the individual. Taking Jon's example, Prosper would not ask for the SSN but would tell the credit reference agency where to request it from, and would provide sufficient context that the user could tie the agency's request for their SSN to the request from Prosper, so that they know to allow it (or change their minds and deny it).
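The asynchronous flow can be sketched in a few lines. This is a minimal illustration with names of my own invention (it is not PAOGA's or Prosper's actual API): the service provider signs a request but never holds the SSN; the request bounces to the individual, who attaches their encrypted SSN plus a single-use permission before it hops on to the bureau.

```python
import secrets
from dataclasses import dataclass

@dataclass
class CreditCheckRequest:
    requester: str        # e.g. "Prosper" (illustrative)
    context: str          # ties the request to a transaction the individual recognises
    payload: bytes = b""  # encrypted SSN, attached only by the individual
    token: str = ""       # one-time permission token

class Individual:
    def __init__(self, encrypted_ssn: bytes):
        self._encrypted_ssn = encrypted_ssn
        self._live_tokens = set()  # permissions issued but not yet redeemed

    def approve(self, request: CreditCheckRequest) -> CreditCheckRequest:
        """Attach the encrypted SSN and a fresh single-use permission."""
        request.payload = self._encrypted_ssn
        request.token = secrets.token_hex(16)
        self._live_tokens.add(request.token)
        return request

    def redeem(self, token: str) -> bool:
        """The bureau redeems a permission exactly once; replays are refused."""
        if token in self._live_tokens:
            self._live_tokens.discard(token)
            return True
        return False

# Asynchronous flow: Prosper -> individual -> bureau.
me = Individual(encrypted_ssn=b"<ciphertext>")
request = me.approve(CreditCheckRequest(requester="Prosper", context="loan application"))

assert me.redeem(request.token) is True    # first use by the bureau succeeds
assert me.redeem(request.token) is False   # any reuse is rejected
```

Nothing here needs to be synchronous: the approve step can happen whenever the individual gets around to it, which is exactly the control Jon is asking for.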
There are implications to this model as Jon suggests. Two of the most critical are:
- Businesses coming around to the mindset that they don't own the individual's data and should be holding as little of it as possible.
- Businesses having flexible, asynchronous processes that can deal with data-fetch and out-of-band permission requests.
Neither is insignificant. In particular, many companies work from a tacit belief that they own the individual's data that they hold and that it's their privilege to exploit it to their maximum advantage. We would suggest that this is short-term thinking, though. In the long term, building a lifetime, trusting relationship with individuals will deliver more value overall.
Selective disclosure goes hand-in-hand with the principle of translucence and is about the degree to which those patterns of light and dark can be resolved into meaningful shapes and details. PAOGA takes the view that:
- the individual should always have the option to remain anonymous.
- the individual should know to whom they disclose information about themselves.
- the individual should be able to choose what information about themselves they wish to release.
- the individual should decide the uses to which their information can be put.
- the individual should have full-recall (i.e. an audit trail) of what has happened with respect to their information.
Implying the following consequences:
- an individual who chooses to remain anonymous or refuse to reveal all requested information might not be able to complete certain transactions.
- an individual who receives a request from another anonymous individual has a problem.
- there are no guarantees about fair use once information is released.
Just to cover that last point again: Once information is released there is nothing that can be done to police how it's used. This is where the individual's responsibility lies: ensuring that they trust the receiving party sufficiently before they release information to them. This is not a software problem.
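The principles above can be made concrete with a small sketch. The names are my own (not PAOGA's actual API): the individual holds their data, decides which fields are released, to whom, and for what use, and keeps a full audit trail of every disclosure.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DisclosureRecord:
    recipient: str
    fields: tuple
    permitted_use: str
    timestamp: str

class PersonalDataStore:
    def __init__(self, data: dict):
        self._data = data
        self.audit_trail = []  # full recall of every release

    def release(self, recipient: str, fields: list, permitted_use: str) -> dict:
        """Release only the fields the individual opts to share, and log it."""
        released = {k: self._data[k] for k in fields if k in self._data}
        self.audit_trail.append(DisclosureRecord(
            recipient=recipient,
            fields=tuple(released),
            permitted_use=permitted_use,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        return released

store = PersonalDataStore({"name": "Alice", "ssn": "<encrypted>", "postcode": "SW1"})
store.release("CreditBureau", ["ssn"], permitted_use="one-time credit check")

assert len(store.audit_trail) == 1
assert store.audit_trail[0].recipient == "CreditBureau"
```

Note what the sketch cannot do: the `permitted_use` field is a record of the individual's intent, not an enforcement mechanism. Once the data leaves the store, policing its use is a trust problem, not a software one, which is exactly the point of the last bullet above.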
I've covered the principle of anonymity before but to précis: I am asking for quotes (e.g. to renew my car insurance) from a range of vendors. To give me a basic quote, all that any of them need to know about me is general information about the type of vehicle I own, the type of driver I am, and the risk factors of my location. My actual identity (name, address, email, telephone, etc.) need only be revealed to the company I choose, ultimately, to give my business to.
From the perspective of all of these companies I remain a translucent identity until I choose to make it otherwise. If I decide not to reveal myself to any company then I'm not going to get insurance. But that's not the point. The point is that my details have not been revealed to the potentially dozens of companies that didn't get my business. Those details can't then be re-used for other purposes or sold for any reason.
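The insurance example boils down to a simple partition of my profile. A hypothetical illustration (the field names are mine): the quote request carries only the risk-relevant attributes, and the identity fields are withheld until I pick a winner.

```python
RISK_FIELDS = {"vehicle_type", "driver_class", "postcode_area"}
IDENTITY_FIELDS = {"name", "address", "email", "telephone"}

profile = {
    "vehicle_type": "hatchback",
    "driver_class": "clean licence, 10 years",
    "postcode_area": "SW1",
    "name": "Alice Example",
    "address": "1 High Street",
    "email": "alice@example.com",
    "telephone": "01234 567890",
}

def quote_request(profile: dict) -> dict:
    """What every insurer sees while I remain translucent."""
    return {k: v for k, v in profile.items() if k in RISK_FIELDS}

def reveal_to_winner(profile: dict) -> dict:
    """What only the chosen insurer receives."""
    return {k: v for k, v in profile.items() if k in IDENTITY_FIELDS}

anonymous = quote_request(profile)
assert IDENTITY_FIELDS.isdisjoint(anonymous)  # no identity leaks to losing bidders
```

The dozens of companies that don't win my business only ever see the first dictionary; there is nothing in it to resell or reuse.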
Jon Udell quotes Tim Sloane as saying:
As a consumer this is indeed exactly the type of service I would like to have. It provides me privacy for the personal data (the key) that I send to the (direct) service provider and allows me to acknowledge that I want that key to be used to release my personal data by the secondary service that stores that data (the vault).
In all such cases, it comes down to the same protocol suggested in this week's column: you attach a one-time permission to the protected data. Can the permittee misuse that permission? Sure. It's only a question of whether, on the whole, the benefit of translucency outweighs the costs. It might or might not, I don't know and I doubt anyone does, but what worries me is that we're not seriously trying to find out.
Our experience so far is that business doesn't want to think about this problem. Consider the following:
- Getting accurate information about an individual is very hard; you tend to end up with a bad photofit picture
- Data rots surprisingly quickly, leaving you at risk of drawing erroneous inferences from a bad dataset
- Holding data is very expensive (how much did your last CRM system cost?)
- Abusing/Losing data can damage brands that are expensive to build
Yet this is exactly what every business is doing today. Furiously building CRM databases and trying to mine the hell out of them.
The alternative seems so simple:
- Ask the individual nicely for the information you need
- Let the individual keep the data up to date themselves
- Hold as little information yourself as possible
- Play fair with what individuals tell you
- Build a lifetime relationship with each individual
The problem is finding any business willing to take a step forward and say "We trust the individual and we think the individual will trust us." What does that tell you about the state of the business world?
PAOGA is attempting to address this problem by bootstrapping consumer interest in taking control of their information. If we can build a big enough community of people who would like the world to work this way we think that services will spring up that want to take advantage and begin to build the momentum that will, ultimately, change how business works forever.
Once you have control why would you ever go back to being a prisoner of vested business interests?
There is also a reluctance to share control. The credit agencies you mention are all in business to serve financial institutions, not consumers. Most efforts to provide consumers even rudimentary control over the data that has been collected about them have been refused. In fact, these credit agencies have already rejected the idea that a consumer should be able to confirm whether their personal credit rating should be released. The only exception credit agencies have made is when the consumer indicates they believe they are the victim of identity theft -- that is, after the data has been spilled.
Okay, this is a biggie: probably the Berlin Wall of identity management. But the Berlin Wall came down, and so too will this problem.
The credit reference agencies are the most entrenched of anti-consumer interests, so it's hardly surprising they are resisting any move toward the individual being in control. But if you think about it, much of the problem of identity fraud can be traced directly to these agencies and their anti-consumer practices.
For example: If I am the gatekeeper of each credit reference check on my identity then I could, myself, tie each incoming request to some transaction I am involved in. Requests coming in out of the blue can be (a) refused and (b) followed up.
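The gatekeeper idea is easy to sketch. This is an illustration under assumed names (no real agency exposes such an interface today): each credit-reference request must carry a reference I issued for a transaction I started; anything out of the blue is refused and logged for follow-up.

```python
import secrets

class CreditCheckGatekeeper:
    def __init__(self):
        self._open_transactions = {}  # reference -> description of what I started
        self.refused = []             # out-of-the-blue requests, to follow up

    def start_transaction(self, description: str) -> str:
        """I initiate a transaction (a loan, a tenancy) and get a reference
        to hand to the counterparty."""
        ref = secrets.token_hex(8)
        self._open_transactions[ref] = description
        return ref

    def handle_request(self, requester: str, reference: str) -> bool:
        """An agency presents a reference; only ones I issued get through."""
        if reference in self._open_transactions:
            del self._open_transactions[reference]  # single use
            return True
        self.refused.append(requester)  # refuse, and keep a record to chase up
        return False

me = CreditCheckGatekeeper()
ref = me.start_transaction("car loan with MyBank")

assert me.handle_request("MyBank's bureau", ref) is True
assert me.handle_request("unknown broker", "bogus-ref") is False
assert me.refused == ["unknown broker"]
```

A fraudster opening accounts in my name would generate exactly the out-of-the-blue requests this refuses, which is why so much identity fraud traces back to the current, gatekeeper-free arrangement.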
Taking this a step further: if the information to be held has to go through me then I am well placed to ensure that it is accurate. I don't want information about someone else's misdemeanours sitting in my file. To be sure, this does change the game. What happens when I actually am a bad debt? Should I be able to refuse to accept details about loans I failed to repay? Of course not, and I'm not trying to suggest that there aren't still problems to solve.
But we've seen the alternative. We're living it. And we're the ones who suffer. Why aren't we more angry about that?
Translucent identity with selective disclosure is possible right now. We have the technology to do it and while there are still problems none of them are insurmountable. If you're interested please get in touch!