Scientists involved in developing some of the world's most advanced artificial intelligence (AI) systems are warning that the technologies they have created could threaten humanity.
Researchers at Google DeepMind, OpenAI, Meta, Anthropic and other labs argue that a lack of insight into AI's reasoning and decision-making processes could allow dangerous behavior to go unnoticed.
In a new study published July 15 on the preprint server arXiv (which has not been peer-reviewed), the authors analyze chains of thought (CoTs), the sequences of intermediate steps that large language models (LLMs) work through when solving complex problems. LLMs use CoTs to break complex queries down into intermediate logical steps expressed in natural language.
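To make the idea concrete, a chain-of-thought prompt simply asks a model to write out its intermediate steps before giving a final answer, so those steps can be read and monitored. The sketch below is illustrative only; the prompt wording and the sample response are hypothetical examples, not taken from the study.

```python
# Illustrative only: what a chain-of-thought (CoT) prompt and response look like.
# The prompt wording and the sample model output below are hypothetical,
# not drawn from the study discussed in the article.

question = (
    "A train travels 120 miles in 2 hours, then 60 miles in 1 hour. "
    "What is its average speed?"
)

# A CoT prompt asks the model to reason step by step in natural language
# before answering, so each intermediate step can be inspected.
cot_prompt = f"{question}\nThink step by step and show your reasoning before answering."

# A typical CoT-style response exposes intermediate logical steps --
# the kind of trace the researchers argue should be monitored.
sample_response = (
    "Step 1: Total distance = 120 + 60 = 180 miles.\n"
    "Step 2: Total time = 2 + 1 = 3 hours.\n"
    "Step 3: Average speed = 180 / 3 = 60 mph.\n"
    "Answer: 60 mph."
)

print(cot_prompt)
print(sample_response)
```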
Source: www.livescience.com