To make it possible for more than one user to edit the same text simultaneously, the team developing Google Wave has taken the data-centric design of Google services a step further. They have detached services from the data and placed them in "robots" on a server. That includes the core services they have built for their new communication and collaboration platform, and the many applications they hope the developer community will build on their data model: applications which even they admit they can't imagine, but which they hope will be exciting.
They have done this to make it easier to share data. But the way this changes how we think about services and data on the Web is itself exciting, and their new lexicon hints at it. The fact that they are prompted to talk about their new services as "robots" suggests that services have become not so very different from the users themselves: the data model treats them the same.
But separating data from services makes possible an even greater leap. If data can be acted on by more than one actor then it becomes possible to imagine data acted on by an unlimited number of actors, even other data. It becomes possible to imagine a "robot" which is not just another disconnected actor on the data, but which consists of interactions between multiple elements of data itself.
The concept of services at Google is already moving in this direction. Data is not just the passive patient of search; it is being used to mediate actions. Take the new spelling-correction robot Spelly, or the new machine-translation robot Rosy (Rosetta Stone, geddit? And do I detect an Australian influence in this new explosion of -y suffixes?). These robots not only act on data, but their actions are mediated by statistics culled from millions of other items of data.
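To make the idea concrete, here is a toy sketch of statistics mediating an action: a spelling corrector that ranks candidate corrections by word frequencies gathered from a corpus. This is the well-known statistical approach, not Spelly's actual implementation, and the tiny corpus here stands in for the millions of documents a real robot would draw on.

```python
from collections import Counter

# A stand-in corpus; a real robot's statistics come from millions of documents.
CORPUS = "the quick brown fox jumps over the lazy dog the fox".split()
WORD_COUNTS = Counter(CORPUS)

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Pick the candidate the corpus statistics make most probable."""
    candidates = [w for w in edits1(word) if w in WORD_COUNTS] or [word]
    return max(candidates, key=WORD_COUNTS.get)

print(correct("teh"))  # → the
```

The point of the sketch is that the "service" has almost no logic of its own: the correction emerges from counts over other people's data.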
Spell checking and translation are obviously quite narrowly focused on text itself, but the search-term suggestion service Google Suggest indicates more generally useful directions such text-mediated services might go in: helping us think about problems, or even making decisions for us. Spam filters already decide much of what we see and what we don't.
At the moment the data for Spelly and Rosy is culled offline into large statistical language models. But given Wave's generalization of services as actors, the potential exists for some future iteration of Spelly or Rosy to be implemented as direct interactions between the millions of data elements which provide their statistics. This would free them from the limitations of fixed statistical generalizations, and give them the true power of data acting on itself: a connection machine or cellular automaton.
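The cellular automaton analogy is easy to make concrete. In the sketch below (an elementary automaton using Wolfram's rule 110, chosen only as a familiar example), each cell's next state is determined entirely by interactions with its neighbouring cells; there is no external service acting on the data, only data acting on data.

```python
RULE = 110  # the rule number encodes the 8-entry update table as bits

def step(cells):
    """Advance one generation: each cell looks only at itself and its neighbours."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the 3-cell neighbourhood (wrapping at the edges).
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and let the data interact.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Scale the row up to millions of elements and make the update rule statistical rather than fixed, and you have a rough picture of the kind of self-acting data the paragraph above speculates about.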
We might not be there yet, but it is encouraging how Google is constantly nudged further and further in the direction of placing raw data at the center of its service model: first by opening that data to multiple interpretations under search, and now, potentially, progressively, iteratively, by liberating it to act.