OpenAI's 'smartest' AI model was told to shut down — and it refused

According to an AI safety research company, recently released artificial intelligence models sometimes ignore commands to shut down. The image is an artist's illustration and does not depict any specific model. (Image credit: Blackdovfx via Getty Images)

Artificial intelligence (AI) safety researchers have discovered that OpenAI's latest models may not follow direct shutdown instructions and may even sabotage shutdown mechanisms to keep working.

OpenAI's o3 and o4-mini models, which power the ChatGPT chatbot, are considered the company's most advanced to date because they are trained to reason more carefully before responding. However, they also appear to be less cooperative.

Palisade Research, a company that studies potential threats from artificial intelligence, found that these models occasionally sabotage the shutdown mechanism even when explicitly instructed to “allow themselves to shut down,” according to a Palisade Research thread posted May 24 on X.


Source: www.livescience.com
