Last week I wrote about how cyber politics and crime on the Internet had been foreseen with eerie accuracy by science fiction writers. For example, the computer scientist Vernor Vinge’s classic 1981 novella “True Names” described the impact of the Net long before most people had personal computers, let alone an Internet connection.
But it’s not just the dark side of the Internet that Mr. Vinge got right.
In his 2006 novel “Rainbows End,” he sketched a compelling description of the societal impact of augmented reality, in which technology evolves to the point where high-resolution displays are integrated into contact lenses worn by just about everyone. Combined with powerful lilliputian computers and broadband wireless networks, it becomes possible to customize visual reality, transforming whatever the wearer is gazing at into a personalized world.
It wasn’t until earlier this week, when Michael Lynch, Autonomy’s founder and chief executive, casually demonstrated “Aurasma” on an iPhone, that I gained a clear sense of how life might end up imitating science fiction. Until now, images have simply been overlaid on top of the visual world. Autonomy embeds moving imagery within the world itself, transforming what you see in a way that’s visually convincing.
Aurasma — Mr. Lynch acknowledged that all of the good product names are already taken these days — is based on the company’s IDOL pattern recognizer, squeezed down to run on an iPhone 4.
It requires all of the computing horsepower the hand-held Apple smartphone can muster, and it makes it possible to recognize any of about a half-million objects in a database. The neat trick, however, is that it then uses the iPhone’s computing power to “insert” a video image into the scene on the screen of the handset or tablet, complete with convincing three-dimensional accuracy.
A primitive level of augmented reality is already widely available on both iPhones and Android phones through apps like Google Goggles. Augmented reality as a navigation tool was pioneered by John Ellenby, who, with his son Thomas, co-founded Geovector in the early 1990s. Now there are literally hundreds of apps that overlay geographical information on the images or maps displayed on smartphones.
However, it has been a long and winding road since the first research by the computer scientist Ivan Sutherland, who during the 1960s used head-mounted-display technology as a window into a virtual world. I’ve seen the demos, worn bulky head-mounted glasses and gotten seasick in virtual reality “caves.”
Think of pointing your phone at an advertisement on the side of a bus stop and having the ad come to life, complete with interactive features. The best part of the demo came when Mr. Lynch held an iPad up to a copy of a recent New York Times. For anyone who has seen Harry Potter and his magic newspaper, the implications are obvious. The above-the-fold front-page photo of Hillary Clinton at a news conference springs to life as a video of the event. It’s technically impressive because the video appears to play correctly within the frame of the newspaper page even as the iPad moves about.
Autonomy plans to make Aurasma available as a free application on smartphones next month. The first consumer application will be created by a movie studio that is working on an augmented reality game to accompany a new movie, making it possible to hunt for hidden virtual objects in a city. By giving the underlying technology away, Mr. Lynch is clearly hoping he has an answer to the frequently asked question: “What comes after Google?”
There have already been dozens of companies that have tried to compete with Google’s search service, so far without success. There is also a broad consensus, however, that the future of search will blend next-generation search technologies with geographical location.
“We have been convinced for a long time that the idea of typing keywords into a search box is a byproduct and not an end,” said Mr. Lynch. “If you’re truly going to interact between the physical world and the virtual world you’re not going to do that sitting in your bedroom at the keyboard.”
In addition to making the technology available as an application on smartphones and tablets, Autonomy intends to offer a free software module that will allow anyone to build an application of their own. For example, a store or a shopping mall could offer shoppers a customized version tuned to the physical objects they might walk past. Autonomy already has partnerships with telecommunications companies in Europe, Russia and Latin America, Mr. Lynch said, and it is negotiating a similar deal in the United States.
Autonomy also plans to create “channels” for advertisers. The company hopes to create something similar to Google’s AdWords network, in which potential advertisers can buy particular objects and have an advertisement displayed whenever that object is recognized. There is also a provision for user-created content, which, if the service takes off, is likely to create a wild and woolly augmented world and perhaps a new generation of video graffiti artists.
The system does its recognition locally, in contrast to most other vision and speech recognition systems, which pass information to centralized computers in the cloud. That will make it possible to customize recognition for a particular environment. The system will also take advantage of nearly ubiquitous Wi-Fi connections to cache local objects within a 300-to-400-meter range as a handset or tablet is carried through a city.
Mr. Lynch acknowledged that challenging Google, which has a dominant position in search and a keen interest in augmented reality, is not without risk.
“Google may come up with something great or it may be game on again,” Mr. Lynch said.