You can trick Google's AI Overviews into explaining made-up idioms

Engadget - Apr 23rd, 2025

Google's AI Overviews feature, powered by its Gemini models, recently demonstrated a significant flaw by offering plausible explanations for fictional, nonsensical idioms. Users found they could easily trick the AI into interpreting phrases like "You can't lick a badger twice" and "You can't golf without a fish" as if they were established idioms. Although logically constructed, these explanations were based on sayings that do not exist, revealing the AI's tendency to hallucinate plausible-sounding information that can mislead users who do not fact-check it.

This incident highlights the broader issue of AI-generated misinformation, which can lead to real-world consequences if users rely on unverified outputs. Similar problems have been observed with other AI platforms, such as when ChatGPT generated nonexistent legal cases, resulting in professional repercussions for the lawyers involved. The persistence of AI hallucinations underscores the need for critical evaluation and fact-checking by users to prevent the spread of false information and to understand the current limitations of artificial intelligence technologies.

Story submitted by Fairstory

RATING

6.8
Fair Story
Consider it well-founded

The article effectively highlights the issue of AI hallucinations, using humorous, relatable examples to illustrate the pitfalls of relying on AI-generated content. It is timely and relevant, addressing a topic of significant public interest as AI technology becomes more integrated into daily life. The story is clear and engaging, and its structure makes it easy to follow. However, it could benefit from greater transparency about how the examples were found and tested, and from a more balanced perspective that acknowledges advancements in AI technology. Including expert insights or proposed ways to mitigate AI risks would also strengthen its impact. Overall, the article provides a valuable overview of AI hallucinations but could delve deeper into the technical and ethical dimensions of the issue for a more comprehensive analysis.

RATING DETAILS

8
Accuracy

The story accurately captures the issue of AI hallucinations, specifically how Google's AI Overviews can be tricked into explaining fictional idioms. The examples provided, such as "You can't lick a badger twice" and "You can't golf without a fish," are consistent with known issues of AI misinterpretation. The reference to lawyers fined for citing nonexistent, AI-generated cases describes a factual incident that underscores the potential for AI to mislead when left unchecked. However, the article could be more precise about the technical reasons behind these hallucinations, such as the limitations of the language models that generate these summaries. Overall, the story is truthful and well supported by the examples cited, but it could delve deeper into the technical aspects to enhance verifiability.

7
Balance

The article primarily focuses on the shortcomings of AI, particularly Google's AI Overview, without offering perspectives on the potential benefits or advancements in AI technology. While it effectively highlights the risks of AI hallucinations, it could present a more balanced view by acknowledging ongoing improvements in AI accuracy and the efforts being made to mitigate such issues. The story leans towards a critical perspective, which is important for awareness, but the inclusion of expert opinions on AI development would provide a more rounded view.

8
Clarity

The article is clear and engaging, using relatable examples to illustrate the concept of AI hallucinations. The language is accessible, and the tone is light-hearted yet informative, which helps convey the message effectively. However, while the examples are easy to understand, the article could improve clarity by providing more context on AI functionality and the specific challenges that lead to such hallucinations. The logical flow from examples to broader implications is well-structured, making the content easy to follow.

6
Source quality

The article cites specific examples and references a known incident involving lawyers misusing AI, which lends credibility to its claims. However, it lacks direct attribution to authoritative sources or experts in AI technology, which could strengthen its reliability. The mention of Engadget and Bluesky as sources adds some credibility, but the story would benefit from direct quotes or insights from AI researchers or industry professionals to enhance its authority and depth.

5
Transparency

The article does not provide detailed information on how the fictional idioms were tested or the methodology used to assess AI performance. While it mentions specific examples and a related legal case, it lacks transparency regarding the context in which these AI hallucinations occur. Providing more background on the AI's operational mechanisms, as well as potential biases in data processing, would improve transparency and help readers understand the basis for the claims made.

Sources

  1. https://www.engadget.com/ai/you-can-trick-googles-ai-overviews-into-explaining-made-up-idioms-162816472.html
  2. https://www.tomsguide.com/ai/google-is-hallucinating-idioms-these-are-the-five-most-hilarious-we-found
  3. https://aitopics.org/doc/news:369F97A1
  4. https://www.urban75.net/forums/threads/google-ai-attempts-to-define-imaginary-idioms.387474/