Supporting The Case Against Data Lock-in

A few days ago the Data Liberation Front, a team created by Google, apparently to reduce the lock-in that its services generate on users' data, publicly clarified its mission (note: the DLF was started in 2007 and made public in 2009; thanks Brian for pointing it out!).

The long and highly significant post, however, went largely unnoticed by the general public. Data lock-in is, of course, of little interest to the average user, and this, in my opinion, makes BigG's initiative even more commendable.

Data lock-in was, for years, a recurring argument in the discussions held in CIO and CTO offices all over the world, back in the years when proprietary software and Free and Open Source software contended for the stage.

A few months ago, the boom in the privacy debate around Facebook opened a very public discussion that brought to light, thanks also to the Diaspora phenomenon, the issues of ethical treatment, ownership, and interoperability of data.

I quote here a series of extracts from the announcement the Data Liberation Front made a few days ago:

“We’ve found that an incredibly effective—although certainly counterintuitive—way to earn and maintain user trust is to make it easy for users to leave your product with their data in tow.  This not only prevents lock-in and engenders trust, but also forces your team to innovate and compete on technical merit.”

  • Breaking data locks and making it easy for users to take their data out increases trust in the provider. Ethical behavior is identified as a customer-retention strategy.

“Locking users in may suppress a company’s need to innovate as rapidly as possible […] This makes your product vulnerable to other companies that innovate at a faster rate. […] If you don’t—or can’t—lock your users in, the best way to compete is to innovate at a breakneck pace”

  • Easing user-base migration drives innovation and forces you to innovate: at the end of the day, it forces your competitors to innovate or die too.

“The point is that users should be in control of their data, which means they need an easy way of accessing it. Providing an API or the ability to download 5,000 photos one at a time doesn’t exactly make it easy for your average user to move data in or out of a product. From the user-interface point of view, users should see data liberation merely as a set of buttons for import and export of all data in a product.”

  • Everything must be doable with a click or two: what is the point of providing only an API? Not everyone knows how to program, and the web itself is moving in the same direction. We will see later how this pursuit of ease of use introduces some non-trivial issues.
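To make the gap concrete, here is a minimal sketch of what "just use the API" actually asks of a user. The paginated export endpoint is hypothetical (the function names, cursor scheme, and page size are illustrative, not any real Google API), but the shape is typical: a loop that an average user cannot be expected to write, which is exactly why one-click export buttons matter.

```python
def export_all(fetch_page):
    """Drain a paginated export API.

    fetch_page(cursor) returns (items, next_cursor), with
    next_cursor=None on the last page. The API shape is a
    hypothetical stand-in for a typical export endpoint.
    """
    items, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            return items

# Simulate a service holding 5,000 photos, served 100 per page.
def fake_fetch(cursor):
    start = cursor or 0
    page = list(range(start, min(start + 100, 5000)))
    return page, (start + 100 if start + 100 < 5000 else None)

photos = export_all(fake_fetch)
```

Fifty round trips for 5,000 photos, plus error handling and retries in any real version: trivial for a developer, a wall for everyone else.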

“[…] data liberation is best provided through APIs, and data portability is best provided by building code using those APIs to perform cloud-to-cloud migration. […] It increases the likelihood of total success and is an all-around better experience for the user. True cloud-to-cloud portability, however, works only when each cloud provides a liberated API for all of the user’s data. We think that cloud-to-cloud portability is really good for users, and it’s a tenet of the Data Liberation Front.”

  • Cloud-to-cloud migration. Since the data generated by using a service can easily reach a non-negligible size (think of a picture-sharing service, a social network, or an e-mail service), managing it can become very hard for the user. According to the DLF, cloud-to-cloud migration is more secure and reliable, since data moves smoothly from one server farm to another without passing through the hard disk or, more generally, the user's infrastructure, which could easily be the weakest link in the chain.
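As a rough illustration of the idea, assuming hypothetical provider APIs exposed as simple open-for-read and open-for-write callables (the names and the in-memory stand-ins below are mine, not anything the DLF ships), a cloud-to-cloud copy can stream each object in fixed-size chunks so it never lands whole on the user's machine:

```python
import io

def migrate(src_open, dst_open, keys, chunk_size=1 << 20):
    """Copy each object from one cloud to another, streaming in
    chunks so no object ever sits whole on the user's disk or in
    the user's memory."""
    for key in keys:
        with src_open(key) as src, dst_open(key) as dst:
            while chunk := src.read(chunk_size):
                dst.write(chunk)

# In-memory stand-ins for the two providers; a real run would open
# HTTP streams against each provider's export/import API instead.
source = {"2009/photo.jpg": b"\xff" * 3_000_000, "about.html": b"<p>hi</p>"}
dest = {}

class _Dest(io.BytesIO):
    def __init__(self, key):
        super().__init__()
        self._key = key
    def close(self):
        dest[self._key] = self.getvalue()  # "upload" happens on close
        super().close()

migrate(lambda k: io.BytesIO(source[k]), _Dest, list(source))
```

The user's laptop only ever orchestrates; the bytes flow between the two server farms, which is why the DLF can call this route both easier and safer.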

I wonder what the point is of physically moving data at all when it can easily be queried over the network. What prevents me from letting Google manage my personal data and the content of my blog, in a standard format, and making them available for querying and publishing through WordPress?
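As a sketch of that scenario: suppose the blog's content were served in a standard format such as Atom. Any platform could then query it where it lives instead of importing a copy. The feed below is invented for illustration; a real consumer would fetch it over HTTP from whoever hosts the data.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def post_titles(feed_xml):
    """Return the titles of all entries in an Atom feed: the
    consuming platform queries the data in place rather than
    migrating it."""
    root = ET.fromstring(feed_xml)
    return [e.findtext(ATOM + "title") for e in root.iter(ATOM + "entry")]

# An invented two-post feed standing in for a remotely hosted blog.
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>My Blog</title>
  <entry><title>First post</title></entry>
  <entry><title>Second post</title></entry>
</feed>"""
```

If the host speaks a standard format, "portability" can mean pointing a new front end at the same data rather than hauling the data anywhere.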

Sure, it is still unclear today what the business model of personal-data-hosting services would be: this approach would upset the balance between the cost of managing the data and the revenue potential of exploiting it, which is the backbone of most of today's digital services.

The bravest can take on the not-so-simple task of managing their personal information entirely locally and personally: buying storage, building procedures, implementing their own "seed" of personal data, and making it all publicly accessible via the web in complete security. They would still be left hoping that lightning, an earthquake, or more banally a thief does not wipe out their last 10 years of online life, unless they have also implemented some disaster-recovery logic.

One thing is certain: killing data lock-in will be the first step towards the commoditization of the user base. Having to win users over on product quality alone will give the green light to a grand season of real innovation, fresh evidence that we live in a time of Singularity.




About meedabyte

Strategist, Consultant and Collaborative Pathfinder
