In case you’re not aware of it, there’s a new game in town called Ingress, brought to you by Google. Playing is invitation-only, so like many people I haven’t had a chance to try it. All I know about it is what I’ve read on the web.
The game has become quite popular in the AR world, because this new medium (AR) is once again being sponsored by no less than Google itself, on top of their Google Glass project. In typical Google style, the invitation model creates scarcity and buzz, magnifying both the desire to play Ingress and its popularity. Several questions were on my mind, the biggest one being: why is Google making a game?
After reading that someone got detained by police while playing the game, it hit me: Google is doing AR data mining with the game! That’s one objective, if not the main one. Data mining from phones is nothing new; however, it’s interesting that they’re doing it this way. If you’re still not quite following, let’s step back a bit.
While there are many definitions of data mining, especially in the computer field, for the purpose of this article let’s stick to this one:
gathering enormous amounts of information and organizing it.
Which, if you’re not aware, is pretty close to Google’s main goal in life: “to organize the world’s information”, or something like that. Anyway, Google has faced this challenge time and time again, so this is nothing new, nor is it the first time they’ve turned to people to help them out.
Speech recognition was a challenge for computers for decades; fortunately, computers have now reached a level of power where they can comprehend what we humans are saying. The problem, some years ago, was that to reach this comprehension a computer needed millions of samples of how people spoke, in order to filter and eventually comprehend what a person was saying. Doing this required an enormous data mining project, in which many machines would record people’s speech in order to learn how to identify it. Enter 800-GOOG-411.
GOOG-411 was a U.S. telephone number that the company offered for free so people could call in and search for whatever they liked. It was tailored as a directory assistance service. Needless to say, being free meant being used a lot, and through it Google was capturing big data on speech. The service ran from 2007 until 2010; the company closed it once it had gathered enough data to make speech recognition feasible.
Today GOOG-411 lives on in the form of Google Now, the speech recognition service. In case you’re wondering, Apple’s Siri is doing the same thing; they’re just a few years behind.
AR content problem
So we now arrive at augmented reality, a field with enormous potential but almost zero content (especially geolocated content), mainly because the needed AR data would have to be surveyed manually one way or another. Thus, Ingress.
In Ingress, players have to identify portals, which according to what we know (if you know better, please clarify in the comments) are existing physical landmarks of a city. I would infer that each time a player marks or identifies a portal, this is registered on Google’s servers, including, of course, all the location data related to it.
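To make that inference concrete, here is a minimal sketch of the kind of record each portal submission could yield. This is pure illustration: Google has never published Ingress’s data schema, so every field name below is an assumption, not the real format.

```python
# Hypothetical sketch of a portal submission record.
# All field names are assumptions; Ingress's real schema is not public.
from dataclasses import dataclass, asdict

@dataclass
class PortalSubmission:
    player_id: str   # which player surveyed the landmark
    name: str        # player-supplied landmark name
    lat: float       # GPS latitude at the time of submission
    lon: float       # GPS longitude at the time of submission
    photo_url: str   # player-taken photo of the landmark

# Example: one player marking a (fictional) landmark as a portal.
submission = PortalSubmission(
    player_id="agent42",
    name="Old Town Clock Tower",
    lat=40.7580,
    lon=-73.9855,
    photo_url="https://example.com/clock_tower.jpg",
)

print(asdict(submission))
```

Multiply one such record by millions of players walking their own cities, and you get exactly the kind of manually surveyed, geolocated dataset that AR content needs.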
But why do such a thing?
Well, if you look closely at the famous Project Glass video, there’s some AR information being displayed. In the video this is a simulation, but to achieve such a thing in real life, data about those real places has to exist. One of the many ways to gather this info is to have people do it themselves. And in this age of gamification, what better way to incentivize this physical survey of landmarks and places for AR than doing it via a game?
Ingress is the name.