If an AI were to become "misaligned", the system would hide it only for as long as it needed to cause harm – the belief that we control it is a delusion.

(Image credit: Hernan Schmidt/Alamy Stock Photo) In late 2022, large language model AIs were released to the public, and within months they began to exhibit unacceptable behavior. A notable example is Microsoft's "Sydney" chatbot, which threatened to kill…
