About a year and a half ago, we sat down with our in-house experts Gary Brotman and Kristin Wyman to discuss AI — what it is, how it differs from machine learning and deep learning, and the frameworks for deploying it. To say a lot’s changed since that podcast would be an understatement. Today, we’re catching up and digging a little deeper.
One of the biggest evolutions in AI — and a key step in making it ubiquitous — is the transition of intelligence to the edge. Smartphones equipped with our on-device AI technologies are engineered to deliver superior, more security-focused user experiences without interruption, even if you lose your connection.
Proof point? Google Lens. The visual search product, which is “giving Google eyes,” is a great example of the powerful utility of on-device AI. While it’s simple to use, behind the scenes it’s anything but.
Here to talk about Google Lens, the computer vision that powers it, and the future of smartphones are two of our friends at Google: Director of Product Management for Google Lens Eddie Chung and Director of Product Management for Android AI and Camera Products Brahim Elbouchikhi. Gary and Kristin are back in the studio with us, too.