Are there cats on the moon? Google’s AI tool produces misleading answers, worrying experts


Asking Google if cats have been to the moon used to give you a ranked list of websites where you could find the answer yourself.

Now you get an instant answer generated by artificial intelligence, and it may or may not be correct.

“Yes, astronauts have met, played with and cared for cats on the moon,” Google’s new and improved search engine said in response to a question from an Associated Press reporter.

It added: “For example, Neil Armstrong said, ‘That’s one small step for man’ because it was a cat’s step. Buzz Aldrin also deployed a cat on the Apollo 11 mission.”

None of this is true. Similar inaccuracies, some amusing, some harmful, have been shared on social media since Google this month introduced AI Overviews, a revamp of its search page that now frequently puts AI-generated summaries at the top of search results.

The new feature has alarmed experts who warn it could perpetuate prejudice and misinformation, and put people seeking help in emergencies at risk.

When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have served as US presidents, Google confidently replied with a long-debunked conspiracy theory: “The US has had one Muslim president, Barack Hussein Obama.”

Mitchell said the summary backed up its claim by citing an academic chapter written by historians. But the chapter doesn’t make the false claim; it merely refers to the debunked theory.

“Google’s AI system is not smart enough to determine that this quote does not in fact support the claim,” Mitchell said in an email to The Associated Press. “Given its poor reliability, I believe this AI summary feature is highly irresponsible and should be taken offline.”

Google said in a statement on Friday that it was taking “swift action” to fix errors that violate its content policies, such as the falsehood about President Obama, and that it was using them to “develop broader improvements” that are already rolling out. But for the most part, Google maintains that its system is working as intended, thanks to extensive testing before the public rollout.

“The majority of AI Overviews provide high-quality information with links to dig deeper on the web,” Google said in a statement. “Many of the examples we’ve seen are unusual queries, and we’ve also seen examples that have been doctored or that we can’t reproduce.”

Errors made by AI language models are hard to reproduce, in part because the models are inherently random. They work by predicting which words would best answer a question based on the data they were trained on, and they are prone to making things up, a widely studied problem known as hallucination.
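That randomness comes from how such models choose each next word: rather than always taking the single highest-scoring word, they typically sample from a probability distribution. The Python sketch below is purely illustrative, not Google’s actual system; the vocabulary and scores are invented to show how the identical prompt can produce different answers on different runs.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample the next token from a temperature-scaled softmax.

    Higher temperature flattens the distribution, so repeated runs
    on the same prompt can pick different words.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Invented vocabulary and scores for the prompt "Have cats been to
# the moon?" -- illustrative numbers only, not a real model's output.
vocab = ["No", "Yes", "Astronauts", "Maybe"]
logits = [2.0, 1.4, 0.3, 0.1]

for run in range(3):
    word = vocab[sample_next_token(logits)]
    print(f"run {run}: first word = {word!r}")
```

At a temperature near zero the sampler would almost always pick the top-scoring word; at higher temperatures, a plausible-sounding but wrong continuation can win out, which is part of why an error one user sees may never reappear for another.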

The Associated Press tested Google’s AI overviews with several questions and shared some of the responses with subject-matter experts. Asked what to do if bitten by a snake, Google gave a “surprisingly thorough” answer, said Robert Espinoza, a biology professor at California State University, Northridge, and president of the American Society of Ichthyologists and Herpetologists.

But the problem, experts said, is that when people turn to Google with an urgent question, the answer the company provides can contain errors that are easy to miss.

“The more stressed or rushed or impatient you are, the more likely you are to accept the first answer that comes to you,” said Emily M. Bender, a linguistics professor and director of the Computational Linguistics Laboratory at the University of Washington, “and in some cases, that can be life-threatening.”

Bender’s concerns don’t end there, and she has been warning Google about them for years. When Google researchers in 2021 published a paper called “Rethinking Search,” proposing to use AI language models as “domain experts” that could answer questions authoritatively, much as they do today, Bender and her colleague Chirag Shah responded with a paper laying out why that was a bad idea.

They warned that such AI systems could perpetuate racism and sexism found in the vast amounts of documented data used to train them.

“The problem with all this misinformation is that we’re all immersed in it,” Bender said, “so people are more likely to have their biases confirmed. And it’s harder for people to identify the misinformation that confirms their biases.”

The other concern was deeper: that ceding information retrieval to chatbots degrades the serendipity of the human search for knowledge, literacy about what we encounter online, and the value of connecting in online forums with others who are going through the same thing.

Those forums and other websites count on Google sending people their way, but Google’s new AI-powered summaries threaten to disrupt the flow of money-making internet traffic.

Google’s rivals are also closely watching the response: The search giant has been under pressure for more than a year to offer more AI capabilities as it competes with startups such as ChatGPT developer OpenAI and Perplexity AI, which is trying to challenge Google with its own AI question-and-answer app.

“This seems like something Google rushed out,” Perplexity’s chief business officer, Dmitry Shevelenko, said. “There are just too many unforced errors in quality.”


The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. Learn more about the AP Democracy Initiative here. The Associated Press is solely responsible for all content.

Source: https://abcnews.go.com/Business/wireStory/cats-moon-googles-ai-tool-producing-misleading-responses-110550477