Google has announced the rollout of a new feature, multisearch, in its Google Lens application. It allows users to search with text and images simultaneously.
The feature augments a standard image search with accompanying text. For example, a user looking for clothing similar to an item in a photo, but in a different colour, can simply upload the picture and type in the desired colour or another characteristic. Users can also ask a question about an object in front of the camera.
The tool was announced last year and has now entered open beta testing. To use it, open Google Lens, take a photo of the subject or upload an image from the device's memory, and then add a text refinement. For now, multisearch is available on iOS and Android devices in the United States running the latest version of the Google Lens application.
“At Google, we’re always coming up with new ways to help you uncover the information you’re looking for — no matter how difficult it is to express what you need,” the company said in a blog post.
The company said it is also exploring ways to enhance the feature with MUM, its own AI model that understands and processes information across text, images and video simultaneously.