Google Lens just got a whole lot smarter! Forget just snapping pictures and hoping for the best – now you can talk to your images and refine your search results with voice context. It’s like having a personal AI assistant that understands what you see and hear.
The “Speak now to ask about this image” feature lets you press and hold the search button in Lens to record a short video while you speak, adding audio context to your search. The results then take both the visual and the spoken information into account, delivering more precise and relevant matches.
Think of it this way: you spot a cool plant but can’t identify it. Instead of relying on Lens’s image recognition alone, you can now say, “What kind of plant is this with the red flowers and fuzzy leaves?” Bam! You’ll instantly get targeted search results based on both the image and your description.
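Under the hood, this is multimodal search: the photo and your transcribed speech go to the model as a single query. Google hasn’t published how Lens does this internally, but as a rough sketch of the idea, here’s how a developer could pair an image with a spoken question (already transcribed to text) using Google’s public Gemini API. The model name, file name, and question are illustrative assumptions, not anything pulled from Lens itself:

```python
# Rough illustration only (not Google's internal Lens pipeline): send an
# image and a transcribed spoken question together as one multimodal query.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes a key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

image = Image.open("mystery_plant.jpg")  # hypothetical photo of the plant
spoken_context = "What kind of plant is this with the red flowers and fuzzy leaves?"

# Because the model sees both inputs at once, its answer reflects the photo
# AND the extra details in the question, mirroring Lens's voice-context search.
response = model.generate_content([image, spoken_context])
print(response.text)
```

The takeaway is simply that the spoken text narrows down what to look for in the image, which is exactly the precision boost the Lens feature promises.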
This feature, first spotted in development back in June, is already rolling out to Android users worldwide. It’s another step in Google’s push to make search more intuitive and conversational.
Google has lately been exploring ways to enrich Android searches with additional context, and the improved Multisearch capability in Google Lens is just the latest in a series of enhancements aimed at making your search experience more effective and personalized.
So, the next time you encounter something you’re curious about, don’t just point your phone at it – talk to it! Google Lens is now ready to listen and deliver the answers you seek.