As part of its efforts to improve its search engine, Google has announced new AI-powered updates to Google Search. According to an announcement by Vice President of Search Elizabeth Reid, the company has introduced two major updates that will make it easier for users to get answers to their queries: Circle to Search and an AI-powered multisearch experience.
According to her, this is part of the company’s approach of exploiting generative Artificial Intelligence (AI)’s ability to understand natural language, making it possible to ask questions on Google Search in more natural and intuitive ways. Earlier outcomes of this approach include the ability to search with your voice and to search with your camera using Lens.
Here is a closer look at what these entail:
Circle (or highlight or scribble) to Search
When something grabs your interest (say, a pair of adorable dog goggles you spot in a video), it can be disruptive to stop what you’re doing and open another app or browser to start searching for information.
Circle to Search is a new way to search for anything on your Android phone screen without having to switch apps. All you have to do is select images, text or videos in whatever way comes naturally to you — like circling, highlighting, scribbling or tapping — and find the information you need right where you are.
So, now, whether you’re texting friends, browsing social media or watching a video, you can search for what is on your screen right when your curiosity strikes. As with other Google Search options, ads will appear in dedicated ad slots throughout the results page.
Circle to Search is launching globally on select premium Android smartphones on January 31, starting with the Pixel 8, the Pixel 8 Pro and the new Galaxy S24 Series.
Point your camera, ask a question, get help from AI
How many times have you tried to find the perfect piece of clothing, a tutorial to recreate nail art or even instructions on how to take care of a plant someone gifted you — but you didn’t have all the words to describe what you were looking for?
Eleven months ago, Google unveiled multisearch in Lens as a new way to search multimodally, with both images and text. With multisearch, users can ask questions about an object in front of them by taking a picture, or refine their search by colour, brand, or other attributes. The feature is powered by the latest computer vision and language understanding techniques.
Now, with recent breakthroughs in generative AI, Google is making exploring the world with multisearch easier.
Starting today, when you point your camera (or upload a photo or screenshot) and ask a question using the Google app, the new multisearch experience will show results with AI-powered insights that go beyond just visual matches. This gives you the ability to ask more complex or nuanced questions about what you see, and quickly find and understand key information.
For example, imagine you’re at a yard sale and you come across a strange-looking board game. There’s no box or instructions, so immediately some questions spring to mind: What is this game and how is it played?
Here is how to use the new multisearch feature:
Just take a picture of the game, add your question (“How do you play this?”), and you’ll get an AI-powered overview that brings together the most relevant information from across the web. This way, you can quickly find out what the game is called and how to win. And with the AI-powered overview, it’s easy to dig deeper with supporting links and get all the details.
AI-powered overviews on multisearch results are launching this week in English in the U.S. for everyone — no enrollment in Search Labs is required. If you’re outside the U.S. and have opted into the Search Generative Experience (SGE), you can preview this new experience in the Google app.
To get started, just look for the Lens camera icon in the Google app for Android or iOS.
Continue to boldly experiment with generative AI in Search
Reid explained that this week’s launch of AI-powered insights for multisearch is the result of testing Google began last year to see how generative AI can make Search radically more helpful. Recall that, two months ago, Google rolled out its Search Generative Experience (SGE) to users in Sub-Saharan Africa as an opt-in experiment in Search Labs.
With SGE, users in Africa will now see an AI-powered overview of key information at the top of the results for a search query on Google, with links to dig deeper. For anyone who has ever been overwhelmed by the amount of information online, this will help them find answers more quickly.
Also, context will be carried over from question to question, helping users more naturally continue their exploration. Right under the snapshot, they’ll see the option to ask a follow-up question or select a suggested next step.
The recent announcements show that Google is on a drive to make AI helpful for everyone, not just early adopters. Reid expressed the company’s commitment to continue experimenting, uncover which applications of generative AI are most helpful, and introduce them into Search more broadly.
“Today’s updates will make Search even more natural and intuitive, but we’ve only just scratched the surface of what’s possible. We have gotten lots of useful feedback from people who have chosen to join this experiment, and we’ll continue to offer SGE in Labs as a testbed for bold new ideas”, she added.
The AI race in 2024 is starting on an exciting note.