How AI & AR play together

Lately AI (Artificial Intelligence) is everywhere; it is the latest trend and very much in vogue. Yet, while AI may look like a fad, it is not: like AR, this technology is here to stay. Google pioneered the use of AI on mobile devices early in Android's life with the Google Goggles app, way back in 2009.

Time has passed, and things have changed. Back then, you first had to take a photo so the app could upload it to Google's servers for further analysis. Today, the Google Translate app handles text analysis for translation, and Google Lens does a fair job recognizing pretty much anything you point it at. The former still requires taking a photo; the latter works live, no photo required. Incidentally, the defunct Amazon Fire Phone had a similar feature.

Where does AR fit in?

As for AR, it achieves similar results, but with a different approach. While AI goes hand in hand with machine learning, AR doesn't. What's machine learning? Please see this video. You're welcome! 🙂 To achieve the same result for landmarks, AR relies on geolocation technologies such as GPS. For object recognition, the image of the object the AR app should recognize must first be uploaded (whether stored on the device or in the cloud).
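To make the landmark case concrete, here is a minimal sketch of a geolocation-based AR trigger: compare the user's GPS fix against a landmark's known coordinates and fire the overlay when the user is close enough. The coordinates, the 100 m radius, and the function names are illustrative assumptions, not part of any specific AR SDK.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes
    # (haversine formula, mean Earth radius of 6,371 km).
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_trigger(user, landmark, radius_m=100.0):
    # Fire the AR overlay once the user is within radius_m of the landmark.
    return haversine_m(*user, *landmark) <= radius_m

# Hypothetical example: a user standing near the Eiffel Tower.
eiffel = (48.8584, 2.2945)
nearby_user = (48.8590, 2.2950)   # tens of meters away -> triggers
distant_user = (48.8700, 2.3200)  # well over a kilometer -> does not
```

Note that this is just a distance check on two coordinate pairs: no camera frame ever needs to be analyzed for the trigger decision itself.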

So, as you can see, the two approaches to recognition differ, with (obviously) different results. As of this writing, AR triggering (object recognition) is more efficient than AI triggering. Why? Because in AR, the preprocessing happens when the app is built; with AI alone, no such preprocessing has happened for the intended functionality.

You see, what current cloud providers do to offer their AI services is analyze millions of images. Then, through their APIs (connection points), app developers can connect and get results from that analysis. But because cloud providers can't know beforehand what the final use of the analysis will be, a generic analysis is done first, and a more specific one only happens when the app requests the information. With current AR technologies, that process is done specifically for the app's intended functionality. For example, the object to be used as a trigger is analyzed by computers beforehand, so by the time the app needs to recognize it, it already has a very good fingerprint of it. The same goes for landmarks: GPS locations are orders of magnitude easier to use than AI object recognition.
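The build-time fingerprint idea can be sketched in a few lines. Real AR SDKs use far more robust feature descriptors, but the principle is the same: a compact signature of the trigger image is computed offline, and at runtime the app only computes the same signature for the live frame and compares the two. This toy version uses an average-hash-style bit string over a small grayscale grid; the images, threshold, and function names are all illustrative assumptions.

```python
def fingerprint(pixels):
    # pixels: 2D list of grayscale values (0-255).
    # One bit per pixel: set when the pixel is brighter than the image mean.
    # In a real pipeline this step runs offline, when the app is built.
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if v > mean else 0 for v in flat)

def hamming(a, b):
    # Number of bits that differ between two fingerprints.
    return sum(x != y for x, y in zip(a, b))

def matches(live_frame, stored_fp, max_bits=4):
    # Runtime check: hash the live frame and compare bit strings,
    # which is much cheaper than a full generic recognition pass.
    return hamming(fingerprint(live_frame), stored_fp) <= max_bits

# Hypothetical trigger image (a 4x4 checker pattern), hashed at build time.
trigger_img = [[200, 200, 10, 10],
               [200, 200, 10, 10],
               [10, 10, 200, 200],
               [10, 10, 200, 200]]
stored = fingerprint(trigger_img)

# A slightly noisy camera frame of the same object still matches...
noisy_frame = [[180, 220, 30, 5],
               [210, 190, 20, 15],
               [5, 25, 190, 210],
               [15, 5, 220, 180]]

# ...while an unrelated frame does not.
other_frame = [[10, 10, 10, 10],
               [10, 200, 10, 10],
               [10, 10, 10, 10],
               [10, 10, 10, 10]]
```

The point of the sketch is the split of work: the expensive part (building the fingerprint of the known object) happens once, before the app ships, so the per-frame cost at runtime is tiny.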

Can AI & AR play together?

Of course! They are and they will! AI certainly enhances a lot of computing tasks; for the moment, though, many of those tasks, like geolocation AR, can still be done without AI. Sticking with the geolocation case: it is not the same to take GPS coordinates, work out where the user is from them, and augment his or her surroundings, as it is to capture the user's entire surroundings with a camera, analyze each frame (even with GPS as an aid), and only then augment the user's reality.

AI will get better and faster, but that doesn't mean, for the moment, that it must be adopted immediately for AR, especially if it is not as accurate. Accuracy matters for AR: without it, nothing triggers, and with no trigger there is no AR.

Image source: Dreamstime