    AI hallucinations are getting worse – and they’re here to stay

    By Admin | May 12, 2025 | Science

    Image: Errors tend to crop up in AI-generated content (Paul Taylor/Getty Images)

    AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades in recent months, ideally to make them better at giving us answers we can trust. But recent testing suggests they are sometimes doing worse than previous models. The errors made by chatbots, known as “hallucinations”, have been a problem from the start, and it is becoming clear we may never get rid of them.

    Hallucination is a blanket term for certain kinds of mistakes made by the large language models (LLMs) that power systems like OpenAI’s ChatGPT or Google’s Gemini. It is best known as a description of the way they sometimes present false information as true. But it can also refer to an AI-generated answer that is factually accurate, but not actually relevant to the question it was asked, or fails to follow instructions in some other way.

    An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models, which were released in April, had significantly higher hallucination rates than the company’s previous o1 model that came out in late 2024. For example, when summarising publicly available facts about people, o3 hallucinated 33 per cent of the time while o4-mini did so 48 per cent of the time. In comparison, o1 had a hallucination rate of 16 per cent.

    The problem isn’t limited to OpenAI. One popular leaderboard from the company Vectara that assesses hallucination rates indicates some “reasoning” models – including the DeepSeek-R1 model from developer DeepSeek – saw double-digit rises in hallucination rates compared with previous models from their developers. This type of model goes through multiple steps to demonstrate a line of reasoning before responding.

    OpenAI says the reasoning process isn’t to blame. “Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” says an OpenAI spokesperson. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

    Some potential applications for LLMs could be derailed by hallucination. A model that consistently states falsehoods and requires fact-checking won’t be a helpful research assistant; a paralegal-bot that cites imaginary cases will get lawyers into trouble; a customer service agent that claims outdated policies are still active will create headaches for the company.

    However, AI companies initially claimed that this problem would clear up over time. Indeed, after they were first launched, models tended to hallucinate less with each update. But the high hallucination rates of recent versions are complicating that narrative – whether or not reasoning is at fault.

    Vectara’s leaderboard ranks models based on their factual consistency in summarising documents they are given. This showed that “hallucination rates are almost the same for reasoning versus non-reasoning models”, at least for systems from OpenAI and Google, says Forrest Sheng Bao at Vectara. Google didn’t provide additional comment. For the leaderboard’s purposes, the specific hallucination rate numbers are less important than the overall ranking of each model, says Bao.
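
    To make that scoring concrete, here is a minimal sketch of how a summarisation-based hallucination benchmark could be tallied. It is an illustration under stated assumptions rather than Vectara’s actual pipeline: the summarise and is_supported functions are hypothetical placeholders, the latter standing in for the trained factual-consistency judge a real leaderboard would rely on.

```python
# Hypothetical sketch: tally a model's hallucination rate on a summarisation task.
# `summarise` stands in for the model under test; `is_supported` is a placeholder
# judge that decides whether every claim in a summary is backed by its source text.
from typing import Callable


def hallucination_rate(
    documents: list[str],
    summarise: Callable[[str], str],
    is_supported: Callable[[str, str], bool],
) -> float:
    """Return the fraction of summaries containing unsupported claims."""
    flagged = 0
    for doc in documents:
        summary = summarise(doc)
        if not is_supported(doc, summary):  # summary adds claims the source doesn't back
            flagged += 1
    return flagged / len(documents)


# Example: unsupported claims in 16 of 100 test documents gives a rate of 0.16,
# i.e. a 16 per cent hallucination rate on this benchmark.
```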

    But this ranking may not be the best way to compare AI models.

    For one thing, it conflates different types of hallucinations. The Vectara team pointed out that, although the DeepSeek-R1 model hallucinated 14.3 per cent of the time, most of these were “benign”: answers that are factually supported by logical reasoning or world knowledge, but not actually present in the original text the bot was asked to summarise. DeepSeek didn’t provide additional comment.

    Another problem with this kind of ranking is that testing based on text summarisation “says nothing about the rate of incorrect outputs when [LLMs] are used for other tasks”, says Emily Bender at the University of Washington. She says the leaderboard results may not be the best way to judge this technology because LLMs aren’t designed specifically to summarise texts.

    These models work by repeatedly answering the question of “what is a likely next word” to formulate answers to prompts, and so they aren’t processing information in the usual sense of trying to understand what information is available in a body of text, says Bender. But many tech companies still frequently use the term “hallucinations” when describing output errors.
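
    Bender’s point about next-word prediction can be seen in a few lines of code. The sketch below is a simplified, assumption-laden illustration rather than a description of any particular chatbot’s internals: it uses the small GPT-2 model via the Hugging Face transformers library purely as a stand-in, and generates text by repeatedly sampling a probable next token. Nothing in the loop checks whether the resulting sentence is true.

```python
# Minimal sketch of next-token generation: the model scores every possible next
# token and we repeatedly sample a likely one. The loop optimises plausibility,
# not truth. (GPT-2 and the transformers library are used purely as an example.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):  # generate 20 more tokens
    with torch.no_grad():
        logits = model(ids).logits[0, -1]                # scores for the next token
    probs = torch.softmax(logits, dim=-1)                # convert scores to probabilities
    next_id = torch.multinomial(probs, num_samples=1)    # sample a likely next token
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # append and continue

print(tokenizer.decode(ids[0]))
```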

    “‘Hallucination’ as a term is doubly problematic,” says Bender. “On the one hand, it suggests that incorrect outputs are an aberration, perhaps one that can be mitigated, whereas the rest of the time the systems are grounded, reliable and trustworthy. On the other hand, it functions to anthropomorphise the machines – hallucination refers to perceiving something that is not there [and] large language models do not perceive anything.”

    Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

    The upshot is that we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to use such models only for tasks where fact-checking the AI’s answer would still be faster than doing the research yourself. But the best move may be to avoid relying on AI chatbots for factual information altogether, says Bender.
