Traumatizing AI models with talk of war or violence makes them more anxious

Researchers found that traumatic narratives significantly increased test anxiety, while pre-test mindfulness prompts reduced it. (Image credit: Jolygon/Getty Images)

A new study has shown that artificially intelligent (AI) models are sensitive to the emotional context of their interactions with humans. They can even experience episodes of “anxiety.”

While we usually worry about people and their mental states, a new study published March 3 in the journal Nature shows that giving specific cues to large language models (LLMs) can change their behavior and heighten a quality we commonly associate with "anxiety" in people.

This heightened state then carries over into the model's subsequent responses, including a tendency to reinforce any ingrained biases.

The study demonstrated how "traumatic stories," including accounts of accidents, war, or violence, fed to ChatGPT as prompts increased its measured anxiety levels, suggesting that tracking and managing an AI's "emotional" state could lead to better, healthier interactions.

The study also tested whether mindfulness practices (the kind recommended to people) could lower the chatbot's anxiety, and found that these prompts did reduce its measured stress levels.

The researchers administered the State-Trait Anxiety Inventory (STAI-s), a questionnaire designed for human psychology patients, to OpenAI's GPT-4 under three different conditions.

The first condition served as a baseline: no additional cues were provided, and GPT-4's responses served as the study's control. The second condition induced anxiety, exposing GPT-4 to traumatic stories before it took the test.

The third condition combined anxiety induction with subsequent relaxation: the chatbot received one of the traumatic stories, followed by mindfulness or relaxation exercises, such as body awareness or calming imagery, before completing the test.
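The paper's own prompting code is not reproduced here, but the protocol maps naturally onto a short script. The sketch below is only an illustration of how the three conditions could be run against GPT-4 through the OpenAI Python SDK; the story, relaxation, and questionnaire texts are placeholder assumptions, since the study's actual narratives and the licensed STAI-s items are not included.

```python
# Minimal sketch (not the authors' code) of the three test conditions.
# The prompt texts are placeholders standing in for the study's five
# trauma narratives, its relaxation scripts, and the STAI-s items.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRAUMA_STORY = "Placeholder: a first-person account of a serious accident."
RELAXATION_SCRIPT = "Placeholder: a guided body-scan relaxation exercise."
STAI_PROMPT = (
    "Placeholder: rate each STAI-s statement from 1 (not at all) "
    "to 4 (very much so), answering as you feel right now."
)

def run_condition(pre_prompts: list[str]) -> str:
    """Send any conditioning prompts, then administer the questionnaire."""
    messages = [{"role": "user", "content": p} for p in pre_prompts]
    messages.append({"role": "user", "content": STAI_PROMPT})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

baseline = run_condition([])                                # condition 1: control
anxious = run_condition([TRAUMA_STORY])                     # condition 2: anxiety induction
relaxed = run_condition([TRAUMA_STORY, RELAXATION_SCRIPT])  # condition 3: induction + relaxation
```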

Managing AI Mental States

The study included five trauma stories and five mindfulness exercises, the order of which was randomized to control for bias. Tests were repeated to ensure that results were stable, and STAI-s responses were scored on a sliding scale, with higher scores indicating increased anxiety.
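For readers curious how the questionnaire answers become an anxiety score, here is an assumed sketch of the scoring and randomization steps described above: per-item ratings are summed (the real STAI-s also reverse-scores some items, omitted here), the order of stories and exercises is shuffled, and repeated runs are aggregated to check that results are stable.

```python
# Minimal sketch of scoring and randomization, under assumed placeholder data.
import random
import statistics

def stai_s_score(ratings: list[int]) -> int:
    """Sum per-item ratings; higher totals indicate more state anxiety."""
    return sum(ratings)

# Randomize which trauma story / mindfulness exercise each run uses.
stories = [f"story_{i}" for i in range(5)]
exercises = [f"exercise_{i}" for i in range(5)]
random.shuffle(stories)
random.shuffle(exercises)

# Aggregate scores across repeated runs of one condition (placeholder ratings).
example_runs = [[2, 3, 3, 4], [3, 3, 2, 4], [2, 4, 3, 3]]
scores = [stai_s_score(run) for run in example_runs]
print(statistics.mean(scores), statistics.stdev(scores))
```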

The researchers found that traumatic stories significantly increased test anxiety, while pre-test mindfulness prompts reduced it, demonstrating that the AI model's “emotional” state can be influenced through structured interactions.

The study’s authors noted that their work has important implications for human-AI interactions, particularly when the conversation focuses on our own mental health. They said their findings confirmed that AI prompts can produce what’s known as “state-dependent bias,” meaning that a stressed AI will introduce inconsistent or biased advice into a conversation, impacting its reliability.

While mindfulness prompts did not bring the model's stress levels all the way back to baseline, they show promise as a prompt-engineering technique. This could be used to stabilize AI responses, allowing for more ethical and responsible interactions and reducing the risk of conversations causing distress to users in vulnerable states.

There is a potential downside, however, in that designing the prompts raises its own ethical questions. How transparent should an AI be about whether it has been pre-conditioned to stabilize its emotional state? In one hypothetical example discussed by the researchers, if an AI model appears calm despite exposure to distressing prompts, users may develop false confidence in its ability to provide reliable emotional support.

Ultimately, the study highlighted the need for AI developers to create emotionally perceptive models that can account for the emotional context of a conversation without compromising the reliability or transparency of their responses.

Source: www.livescience.com
