We are shifting “from a mobile-first to an AI-first world,” says Google CEO Sundar Pichai, and Google is helping to move things along on several fronts, as Pichai details in a blog post for the company.
One of these fronts will be highly visible to the masses of end users: Google Lens, which applies machine vision to a user’s smartphone camera, analyzing and interpreting what it sees. Augmented reality systems that identify businesses and popular tourist destinations already exist; it’s difficult to imagine all the ways Google’s AI technology will advance the concept, but Pichai offers one example: a user crawling “on a friend’s apartment floor to see a long, complicated Wi-Fi password on the back of a router.” In that situation, Google Lens will understand that the user is trying to get the password and will automatically use the one it sees to log into the network, without the user having to do anything beyond pointing the camera at it.
Meanwhile, less consumer-facing applications of the technology could prove even more impactful. Google.ai is a new, open platform that will integrate all of Google’s AI systems so that researchers and developers can leverage them broadly. It has already produced a related initiative, Google for Jobs, which will use AI to connect job seekers with employers more efficiently. Another program, AutoML, enables neural networks to create other neural networks; within a few years it could allow developers to build machine learning systems without the high level of expertise such efforts currently require.
It’s too early to predict the tangible, real-world effects of all these efforts, but it is clear that Google is banking on AI as the next big front in an increasingly connected world.