Google Lens uses AI to identify objects around you, like the Google Goggles of old
Google Goggles, in case you’re unfamiliar, was an app that used Google’s search technology to identify objects in the pictures you took, with some degree of accuracy. It stagnated over the past three years with no updates and looked like another of Google’s legendary abandonware products. Fast forward to I/O 2017, where Google announced the “successor” to Goggles: Google Lens, a platform coming soon to Google Assistant and Google Photos.
Google Lens uses the latest in Google’s machine learning technology to not only identify objects in your pictures but also suggest corresponding actions to take. For example, if you use Google Assistant to take a picture of a concert venue billboard, Lens can identify the act or event being featured and offer relevant actions, such as reading more about the performer, buying tickets, or adding the event to your calendar.
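To make the "recognize, then suggest" flow concrete, here is a minimal sketch in Python. Everything in it is hypothetical: Google has not published Lens internals or an API, so the function names, labels, and action lists below are illustrative stand-ins, not anything from Google's actual system.

```python
# Hypothetical sketch of a "recognize object, then suggest actions" pipeline.
# None of these names come from Google; they only illustrate the idea.

def recognize(photo_name):
    """Stand-in for an image-recognition model that labels a photo."""
    # A real system would run a vision model here; we fake the result.
    fake_labels = {"billboard.jpg": "concert_event", "kitten.jpg": "cat"}
    return fake_labels.get(photo_name, "unknown")

def suggest_actions(label):
    """Map a recognized entity to follow-up actions, as the article describes."""
    actions = {
        "concert_event": ["read about the act", "buy tickets", "add to calendar"],
        "cat": ["identify the breed", "find similar images"],
    }
    return actions.get(label, ["search the web"])

print(suggest_actions(recognize("billboard.jpg")))
# → ['read about the act', 'buy tickets', 'add to calendar']
```

The interesting part is the second step: instead of stopping at a label, the system uses what it recognized to pick context-appropriate actions, which is what distinguishes Lens from the old Goggles.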
Google Lens promises to do the same with pictures already in your Google Photos library. For example, it might identify the breed of cat in that kitten picture you downloaded last week.
One capability Lens shares with Goggles is recognizing text and translating it when necessary. As demonstrated during the keynote, a Google employee took a picture of a sign outside a Japanese restaurant, and Lens translated the featured food item and its price, even pulling up pictures of the dish from Google Search. You can read more about it at the link below.