AI agents labeled ‘female’ are more likely to be exploited

(Image credit: Feodora Chiosea/Getty Images)

People are more likely to exploit AI companions labeled as female than those labeled as male — suggesting that gender-based discrimination extends beyond purely human interactions.

A new study, published Nov. 2 in the journal iScience, examined differences in people’s willingness to cooperate when their human or AI partners were assigned feminine, nonbinary, masculine, or no gendered labels.

The researchers asked participants to play out a well-known hypothetical scenario called the “Prisoner’s Dilemma,” a game in which two players each choose either to cooperate or to act alone. Mutual cooperation produces the best outcome for both.


However, if one player cooperates and the other does not, the defector comes out ahead — creating an incentive to “exploit” the partner. If neither cooperates, both end up with low scores.
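The incentive structure described above can be sketched as a standard payoff matrix. The point values below are common textbook numbers chosen for illustration, not the actual payoffs used in the study:

```python
# Illustrative Prisoner's Dilemma payoffs as (player A points, player B points).
# These are textbook example values, not the study's actual payoffs.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # A is "exploited" by B
    ("defect",    "cooperate"): (5, 0),  # A exploits B
    ("defect",    "defect"):    (1, 1),  # mutual defection: both score low
}

def play(choice_a, choice_b):
    """Return the (A, B) point totals for one round."""
    return PAYOFFS[(choice_a, choice_b)]
```

Defecting against a cooperator yields the highest individual score, which is the “exploitation” the researchers measured.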

The results showed that participants were roughly 10% more likely to exploit an AI partner than a human one. Participants were also more willing to cooperate with feminine, nonbinary, and gender-neutral partners than with masculine ones, expecting cooperation in return.

The lower likelihood of cooperating with masculine partners stemmed from doubts that those partners would cooperate. This was especially pronounced among female participants, who cooperated more with “feminine” agents than with masculine ones — a pattern known as “homophily,” the tendency to favor those perceived as similar to oneself.

“Observed skewed perspectives during human interactions with AI entities are poised to shape the development, potentially maximizing user involvement and building reliance in relation to automated systems,” the researchers wrote in the study. “Those designing these systems must acknowledge and actively address unwanted biases in interpersonal exchanges to mitigate them within interactive AI agent designs.”

The risks of anthropomorphizing AI agents

Participants’ decisions not to cooperate stemmed from two main motives. First, they expected their partner to defect and wanted to avoid ending up with a low score. Second, they expected their partner to cooperate, in which case defecting would protect them from a low score while earning the highest payoff — at the other player’s expense. The researchers classified this second case as exploitation.
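Both motives point to the same choice: with standard Prisoner’s Dilemma payoffs (again, illustrative textbook values rather than the study’s), defecting scores higher than cooperating no matter what the partner is expected to do:

```python
# Points awarded to "me" for each (my_choice, partner_choice) pair.
# Textbook example values, not the study's actual payoffs.
MY_POINTS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,  # exploiting a cooperator pays best
    ("defect",    "defect"):    1,  # mutual defection beats being exploited
}

def best_response(expected_partner_choice):
    """Choice that maximizes my points, given what I expect the partner to do."""
    return max(("cooperate", "defect"),
               key=lambda mine: MY_POINTS[(mine, expected_partner_choice)])
```

If I expect cooperation, defecting is exploitation (5 points vs. 3); if I expect defection, defecting is damage limitation (1 point vs. 0). This is why mutual defection is the game’s equilibrium even though mutual cooperation scores better for both.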

Participants were more willing to “exploit” partners assigned feminine, nonbinary, or gender-neutral labels than masculine ones, and this tendency was stronger when the partner was an AI. Men were more likely than women to exploit their partners overall, and preferred cooperating with human partners over AI. Women were more cooperative than men and made no distinction between human and AI partners.

The study did not include enough participants identifying outside the male-female binary to draw conclusions about how people of other genders interact with gendered human and AI partners.


The study noted that AI tools are increasingly anthropomorphized — given human-like qualities such as gender and identity — to build user trust and engagement.

However, giving AI human-like characteristics without considering how gender bias shapes interactions risks reinforcing and amplifying ingrained discriminatory patterns.


While most AI systems today take the form of online chatbots, people may soon share roads with autonomous vehicles or rely on AI to manage their work schedules. That future would require cooperating with AI much as we expect to cooperate with other people — making an understanding of AI gender bias critically important.

“Although reflecting prejudicial attitudes directed towards gendered AI agents might not signify a substantial ethical conundrum intrinsically, it might encourage undesirable practices, amplifying pre-existing gender discrimination within our societies,” the researchers added.

“By assimilating the fundamental patterns of prejudice coupled with user insight, designers are equipped to advance towards formulating impactful, reliable AI systems poised to adequately address their users’ needs, furthering and conserving constructive societal values inclusive of equality and justice.”

Damien Pine, Live Science contributor

Damien Pine (he/him) is a freelance writer, artist, and former NASA engineer. He writes about science, physics, technology, art, and other topics, with a focus on making complex ideas accessible. He holds a degree in mechanical engineering from the University of Connecticut, and he gets excited every time he sees a cat.
