(Image credit: Getty Images/peshkov)
A recent study has found that large language models (LLMs) become less "smart" with each new version, oversimplifying and in some cases distorting key scientific and medical findings.
After analyzing 4,900 research paper abstracts, the scientists found that versions of ChatGPT, Llama, and DeepSeek were five times more likely to oversimplify scientific results than human experts.
When asked for precision, the chatbots were twice as likely to overgeneralize results as when asked for a simple summary. Testing also showed that newer versions of the chatbots overgeneralized more often than previous generations.
Source: www.livescience.com