Google Enhances Lens Multisearch with AI-Powered Overviews

Google is bringing AI-powered overviews to multisearch in Google Lens, letting users ask questions about images and get generative answers. The feature is available to all U.S. users in English.

Google Lens Unleashes AI-Powered Multisearch: Transforming Visual Queries into Insightful Answers!

Google has announced a significant upgrade to Google Lens, bringing AI-powered overviews to its multisearch capabilities. Users can now point their camera at something, or upload a photo to Lens, ask a question about what they see, and receive a generative AI-driven answer. Rather than returning only visual matches, the upgraded multisearch provides insights drawn from the web.

For instance, a photo of a plant paired with the question "When do I water this?" won't merely surface images of similar plants; it will identify the plant and offer watering guidance, such as "every two weeks." The feature draws on information from across the web, including websites, product pages, and videos.

This AI-powered enhancement also works with Google's new search gesture, Circle to Search. Users can initiate a generative AI query with a gesture, then ask questions about the circled item. It's important to note, however, that this multisearch feature in Lens is distinct from SGE (Search Generative Experience), Google's experimental generative AI search, which remains opt-in only.

Unlike SGE, this feature is launching for everyone in the U.S. in English, without being confined to Google Labs. To access it, tap the Lens camera icon in the Google search app for iOS or Android, or in the search box on your Android phone. With this addition, Google aims to keep search relevant in the AI era, adding a new dimension to visual search and changing the way users interact with images.