We ranked the AI features announced at Google I/O from most useful to most gimmicky.


Kelly Wang/ZDNET

At its annual developer event, Google I/O, Google announced many new AI products, features, and upgrades. How many? As CEO Sundar Pichai wryly admitted near the end of the keynote, AI was mentioned 120 times in the two-hour presentation. Some of these new products offer AI solutions to common problems; others, while impressive, don't add much value to our daily lives. At least not to mine.

Also: The 9 biggest announcements at Google I/O 2024: Gemini, Search, Project Astra, and more

To help you sort through all the announcements and identify the ones that could have a positive impact on your daily life, I've rounded up the AI features that impressed me the most and are most likely to optimize your day. They're ranked from most useful to most gimmicky.

1. Ask Photos

This feature was mentioned only briefly during the keynote, so you might have missed it. However, Ask Photos could benefit most people by bringing Google's Gemini chatbot to Google Photos, letting users search and organize their photos conversationally.

Also: This subtle (but useful) AI feature was my favorite of the Google I/O 2024 announcements

The Ask Photos feature allows users to describe what photos or content they would like to find in an album. Google Photos then finds the photos in your camera roll and can even package multiple photos together if you want, as demonstrated on stage.

On the I/O stage, Pichai gave two examples of how this feature could be useful. In the first, the user asked, "What is my license plate number?" Gemini used context to determine which car belonged to the user and pulled the number. In the second, a user who wanted to see photos of her daughter's progress as a swimmer over time had Gemini automatically package the highlights with a single request.

Given the number of photos we take and store every day, this kind of help categorizing, organizing, and packaging content is extremely useful. Google shared that this feature is coming to Google Photos later this summer, and hinted that more features are on the way.

2. Gmail Q&A

This feature was also only briefly explained near the end of the keynote, so it's easy to miss. However, it solves a real problem. During the Google Workspace portion of the keynote, the company announced three new features coming to Gmail on mobile, including Gmail Q&A.

As the name suggests, the Gmail Q&A feature allows users to chat with Gemini about the context of their emails within the Gmail mobile app, allowing them to ask specific questions about their inbox.

Also: 5 exciting Android features Google announced at I/O 2024

For example, in a demo presented on the Google I/O stage, a user asked Gemini to compare roof repair contractor bids by price and availability. Gemini was able to retrieve the information from multiple emails and display it to the user.


Because of my job (and shopping habits), my inbox is flooded with emails every day. A tool that can conversationally answer questions about multiple emails from your phone is a game-changer, taking the assistance provided by email AI summarizers to the next level. This feature will roll out to Google Labs users in late July.

3. Project Astra/Gemini Live

One of the most memorable moments of the keynote was when Google DeepMind played a video showing Project Astra, an AI voice assistant that can use the user's camera to assist with visual prompts, as seen in the video below.

Introducing Project Astra, our new project focused on building a future AI assistant that can be truly helpful in everyday life. 🤝
See it in action in two parts, each captured in a single take, in real time. ↓ #GoogleIO pic.twitter.com/x40OOVODdv

— Google DeepMind (@GoogleDeepMind) May 14, 2024

Project Astra is a Google DeepMind project that aims to reimagine the future of AI assistants by making voice assistants aware of the user's environment. The project is built into Gemini Live, a mobile experience that allows users to converse with Gemini about their surroundings.

Also: I demoed Google’s Project Astra and it felt like the future of generative AI (until it wasn’t)

The Gemini Live experience also allows users to choose from a variety of natural voices and pause mid-conversation, making these interactions more natural and intuitive.

Users can't take advantage of Gemini Live's full multimodal experience yet, but the technology could transform the voice assistant experience once Google rolls out the complete version later this year. Which leads to the next point.

4. Google Assistant: Demoted, but not dead

During the event, Google subtly mentioned that Gemini could soon replace Google Assistant as the default AI assistant on Android phones. Despite the low-key mention, this is a very big deal because it affects how Android customers well beyond the Pixel user base interact with voice assistants.

Kelly Wang/ZDNET

This change is also important because Gemini is capable of more advanced language processing, improving the quality of assistance. Google's plans for Gemini look promising: the company has shared that the AI will eventually be overlaid on a variety of services and apps, providing multimodal, on-screen support on request.

5. Gemini Advanced upgraded to Gemini 1.5 Pro

Google first launched Gemini Advanced, Gemini's premium subscription tier, in February, giving users access to Google's latest AI models and longer conversations. At Google I/O, the company expanded the offering even further, and one of the biggest upgrades was access to Gemini 1.5 Pro.

Gemini 1.5 Pro provides a 1-million-token context window. To put that number in perspective, as Pichai said on stage, users can now upload up to 1,500 pages of documents, 100 emails, or 96 Cheesecake Factory menus. Google claims this is the largest context window of any widely available consumer chatbot.

Also: What does a long context window mean for an AI model?

I don't think the average user needs this kind of window, but for superusers who need help working with large amounts of data, the added context is a game-changer. Interested users can access Gemini Advanced through the Google One AI Premium plan, which costs $20 per month after the trial period ends.

6. Veo and Imagen 3

At Google I/O, Google announced its cutting-edge text-to-image generator, Imagen 3, and text-to-video generator, Veo. Both have been significantly upgraded from previous versions to improve output quality and increase fidelity to user prompts. The models are being previewed with select creators; to access either one, interested users can join a waitlist.

Introducing Veo: our most capable generative video model. 🎥
Create high-quality 1080p clips longer than 60 seconds.
It can work across a range of cinematic styles, from photorealism to surrealism to animation. 🧵 #GoogleIO pic.twitter.com/6zEuYRAHpH

— Google DeepMind (@GoogleDeepMind) May 14, 2024

Even though both models look very promising and push AI image and video generation forward, they rank lower on this list because they seem to add value mainly to the daily workflows of creative professionals who produce videos and images. For everyone else, they're nice tools to have in your pocket when the occasion arises.

7. AI Overviews in Google Search

Last but not least is Google Search's AI Overviews feature. I put AI Overviews at the bottom of the list because, while some may find the AI-generated insights at the top of their search results helpful, there wasn't really a need to push the feature to all English-language search users in the US. The broad rollout solves a problem that never existed in the first place.

Also: The four biggest features of Google Search announced at Google I/O 2024

Google's previous system, where you had to opt in to the Search Generative Experience (SGE) to access AI Overviews, seemed more convenient: you could easily access the feature if you wanted it, and your search experience stayed the same if you didn't.


Source link: https://www.zdnet.com/article/i-ranked-the-ai-features-announced-at-google-io-from-most-useful-to-gimmicky/#ftag=RSSbaffb68
