Google will soon start using GenAI to organize some search results pages

At the Google I/O 2024 developer conference on Tuesday, Google announced that it plans to use generative AI to organize the entire results page for some queries. That's in addition to the existing AI Overview feature, which creates a short snippet with aggregate information about the topic you searched for. AI Overviews become generally available Tuesday, after a stint in Google's Search Labs program.
A search results page that uses generative AI as its ranking mechanism will have wide-reaching consequences for online publishers. "We don't think AI overviews is all of what there is," Liz Reid, the head of Google Search, said in a press briefing ahead of the announcement. "There's opportunities to use generative AI to infuse throughout search, and one of the areas where I'm personally really excited about is building an AI-organized results page."
For now, Google plans to show these new search results pages when it detects that a user is looking for inspiration. In Google's example, that's visiting Dallas for an anniversary trip. Next, it will show these results when users look for dining options and recipes, with movies, books, hotels, shopping and more to follow.
"We are going to use generative AI to actually organize the whole results page, to think about the topic and understand what's interesting, recognizing that you might want recommendations," Reid said. "Rooftop patios are great in Dallas, because of the season. It's also known for historic elegance and so you can really dig in to get something that's really inspiring for you and Google can do the brainstorming with you on this."
In the example Reid showed, the results page featured lists of “anniversary-worthy restaurants,” organized in a carousel with the usual star ratings but also short, GenAI-generated summaries of reviews. That list was augmented with discussions from Reddit (what else), and AI-generated lists of places to see live music in an intimate setting, romantic steakhouses and critic picks. At the bottom of that page, there is also an option to see “more web results” for what we can only assume is a more traditional search experience.
As of now, it's not clear where Google will place ads on these pages.
In the pre-I/O press briefing, a reporter asked CEO Sundar Pichai if the traditional Google Search would survive Gemini. Pichai, unsurprisingly, didn’t really answer this question and instead argued that Google wants to stay focused on the user.
“You meet them as their needs evolve,” he said. “Overall, when we do that, people respond, people engage with the product more. So across search and Gemini, I’m excited we can expand the kind of use cases we can help users with. I already see it. You see examples of the kind of complex questions we can solve, how we can help them more along their journey, how we can integrate them with our products, and help them more deeply. And so I view all that as a net positive. And to me, it feels like this is a moment of growth and opportunity, not the other way about — and so we are pretty excited about what’s happening.”
Maybe SEO is dead after all.