The processes used in complex thinking models produce significantly more emissions than their traditional counterparts. (Image courtesy of Getty Images)
The more accurate we try to make AI models, the larger their carbon footprint becomes: some prompts produce up to 50 times more carbon dioxide than others, according to a new study.
Reasoning models such as Anthropic's Claude, OpenAI's o3, and DeepSeek's R1 are specialized large language models (LLMs) that spend more time and computational resources to produce more accurate answers than their predecessors.
But despite some impressive advances, these models still face serious limitations in their ability to solve complex problems. Now, a team of scientists has identified another limit on their effectiveness: their outsized carbon footprint. The findings were published June 19 in the journal Frontiers in Communication.
Source: www.livescience.com