Google’s mission statement has always been “to organize the world’s information and make it universally accessible and useful” – an undoubtedly worthwhile mission, and one they’ve arguably already accomplished. The verb “to google” has been synonymous with seeking information since even before it was officially added to the Merriam-Webster dictionary in 2006. But could Google make the world’s information even more accessible and even more useful? I believe that with the advances in AI Google is working on, the ways we access and use this information will soon change dramatically.
As I mentioned in my presentation last week, Google’s users are constantly generating new data for Google to train their AI with. The almost six billion searches performed a day help teach the AI how to find the best answers to every question we as humans can come up with. Every time you perform a search and then select what you deem to be the best result, you are helping to reinforce Google’s understanding of the world’s information. But this crowd-sourcing of training data doesn’t just apply to search. The two billion monthly users of the Android operating system are teaching Google’s AI how we all use our phones. The newest version of Android can now predict the next action a user will take on their phone with a 60% success rate (and this will only keep improving). Additionally, the 35 billion emails that Google processes a day are helping the AI learn how we communicate with each other in more formal or work-related situations (see Gmail’s new Smart Compose below). Collectively, all of this data is improving Google’s overall understanding of how people use their more than 250 different products. Eventually, Google Assistant will be able to tap into whichever one of these products you need at that moment and then serve up only the relevant content you’re looking for, without you needing to sort through all of Google’s different services yourself.
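To make the click-feedback idea concrete, here is a deliberately tiny sketch of the loop described above: users click results, and those clicks nudge future rankings for the same query. Every name here is invented for illustration; Google’s actual ranking systems are vastly more sophisticated than a click counter.

```python
from collections import defaultdict

class ClickFeedbackRanker:
    """Toy ranker: each click on a result boosts it for future searches of the same query."""

    def __init__(self):
        # (query, result) -> accumulated click score
        self.scores = defaultdict(float)

    def record_click(self, query, result):
        """A user picked `result` for `query` -- treat that as a training signal."""
        self.scores[(query, result)] += 1.0

    def rank(self, query, candidates):
        """Order candidates by how often users have chosen them for this query."""
        return sorted(candidates, key=lambda r: self.scores[(query, r)], reverse=True)

ranker = ClickFeedbackRanker()
ranker.record_click("python tutorial", "docs.python.org")
ranker.record_click("python tutorial", "docs.python.org")
ranker.record_click("python tutorial", "example.com/blog")
print(ranker.rank("python tutorial", ["example.com/blog", "docs.python.org"]))
# docs.python.org ranks first: it received more clicks for this query
```

The point of the sketch is just the feedback loop itself: billions of daily selections act as free labels telling the system which answers people actually found best.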
At their annual developers conference this summer, Google announced new functionality for their Google Assistant along with several other advances in AI technology that all point to an exciting future powered by Google’s AI Assistant. By far the coolest of these announcements was Google Duplex which will be added to Google Assistant in the near future. If you still haven’t seen the demo of Duplex, go watch it here:
If after that you still don’t believe it’s real (I didn’t at first), you can find more sample phone calls here:
Duplex enables the Google Assistant to make phone calls for you. This initial version will be able to: make reservations at restaurants, set up hair salon appointments, and call to find out store holiday hours. While this first version is limited in what it can do, there are many potential upgrades for Duplex that could prove very lucrative for Google. These include selling the tech to businesses to replace today’s infuriating automated voice messaging systems, or even entire human call centers.
Another upgrade to Google Assistant is Gmail’s new Smart Compose. Smart Compose uses AI to provide auto-complete suggestions for your sentences while typing an email. Though not yet at the point of writing entire emails for you, it’s clear that’s where Google would like to ultimately take this. Paired with Google’s messaging app Allo, Google Assistant is beginning to add quite a bit of value on top of traditional messaging services.
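For a feel of what powers an autocomplete feature like Smart Compose, here is a toy bigram model: it counts which word tends to follow which in a training corpus, then suggests the most frequent follower of the word you just typed. This is my own minimal illustration; Smart Compose itself uses neural language models, not word counts.

```python
from collections import defaultdict

class NextWordSuggester:
    """Toy bigram model: suggests the word most often seen after your last typed word."""

    def __init__(self):
        # word -> {next_word: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sentences):
        for sentence in sentences:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.counts[prev][nxt] += 1

    def suggest(self, prefix):
        """Return the most likely next word given the text typed so far, or None."""
        words = prefix.lower().split()
        if not words or words[-1] not in self.counts:
            return None
        followers = self.counts[words[-1]]
        return max(followers, key=followers.get)

model = NextWordSuggester()
model.train([
    "thanks for your email",
    "thanks for the update",
    "thanks for your help",
])
print(model.suggest("thanks for"))  # "your" -- seen twice, vs "the" once
```

Scale the same idea up to a neural model trained on billions of emails and you get suggestions for whole phrases rather than single words, which is exactly the trajectory Smart Compose is on.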
Google also unveiled two new products involving computer vision which, when taken together with Google Assistant’s voice commands, could be the beginning of a second attempt at Google Glass somewhere down the line.
The first is a new walking navigation feature for Google Maps that displays turn-by-turn navigation directions in an augmented reality overlay of your camera. This combines computer vision with Google Maps’ already powerful knowledge of the world around you to place directions right in the streets you’re walking on, show signs for places you’re passing by, and even give you fun little walking guides.
In the same realm of computer vision, Google announced their new Google Lens product, which will run directly in the Android camera app. Lens is Google’s first real attempt at visual search, an innovative new way to find information that will, in many cases, replace Google’s traditional text search. Lens currently has three features which can all perform different types of searches better than Google’s traditional search. The first is smart text selection: Lens can recognize written words in your environment, allowing you to do quick searches for unfamiliar words, translate foreign words, or copy and paste text for later, all from right in your camera app. The second is style search: point your camera at any piece of clothing or furniture and Google will show you shopping results for that item along with any similar-looking items. The third and most disruptive is real-time visual search: hold your camera up anywhere and Google will identify all of the objects on the screen and provide search results for whichever ones you choose.
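The three features above can be thought of as one detection step followed by a dispatch: depending on what the camera sees, Lens routes it to a different kind of search. The sketch below shows only that routing idea; the `detection` schema and function names are entirely hypothetical and not Google’s API.

```python
# Hypothetical dispatch for Lens-style results. A real system would run an
# object detector first; here we assume it hands us a dict describing one hit.
def dispatch_lens_result(detection):
    """Route a detected object to the kind of search Lens would run.

    `detection` looks like {"kind": "text", "content": "..."} -- this
    schema is invented purely for illustration.
    """
    kind = detection["kind"]
    if kind == "text":
        # Smart text selection: search, translate, or copy the recognized words.
        return ("smart_text", detection["content"])
    if kind in ("clothing", "furniture"):
        # Style search: shopping results for this and similar-looking items.
        return ("style_search", detection["content"])
    # Everything else falls through to general real-time visual search.
    return ("visual_search", detection["content"])

print(dispatch_lens_result({"kind": "text", "content": "Bonjour"}))
# ('smart_text', 'Bonjour')
```

The interesting design point is that all three features share one entry point (the camera) and differ only in what happens after recognition, which is why they can all live inside a single app.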
All of these announcements represent huge advancements in the field of AI. They serve as clear evidence of Google’s current dominance in the space but are nowhere near what Google will ultimately be capable of. As more and more of Google’s products become more closely integrated, and Google Assistant empowers better access to all of them, Google’s offerings will eventually transform into a single product: a *mostly* generalized AI assistant capable of helping you with almost any aspect of your life.