
AI digital assistants are often portrayed as female, reinforcing the idea that women are subordinate. (Image credit: cottonbro studio/Pexels)
In 2024, the number of artificial intelligence (AI) voice assistants worldwide surpassed 8 billion, more than the total number of people on the planet. These assistants are helpful, polite, and almost always female by default.
Their names carry gendered overtones, too. Apple's Siri, for example, is a Scandinavian female name meaning "beautiful woman who leads you to victory".
This is not just harmless marketing. It is a design choice that reinforces entrenched ideas about the roles women and men play in society.
Nor is it purely a matter of representation. These choices carry real consequences, normalizing the subordination of women and increasing the risk of abuse.
The unsettling aspect of ‘friendly’ AI
Recent research reveals the extent of harmful interactions with feminized AI.
A 2025 study found that up to 50% of human-machine interactions involved verbal abuse.
Another study, from 2020, put the figure at between 10% and 44%, with conversations often containing sexually explicit language.
Yet the industry is not making fundamental changes; many developers still fall back on scripted responses to verbal abuse, along the lines of "Hmm, I'm not sure what you mean by that question."
These patterns raise genuine concerns that such behaviour could spill over into human relationships.
Gender is fundamental to this issue.
A 2023 analysis found that 18% of user interactions with a female-presenting agent were sexual in nature, compared with 10% for a male-presenting agent and just 2% for a non-gendered bot.
Given how difficult it can be to detect sexualized language, these figures may understate the scale of the problem. In some cases the numbers are staggering: the Brazilian bank Bradesco reported that its feminized chatbot received 95,000 sexually harassing messages in a single year.
Even more troubling is how quickly abuse escalates.
Microsoft's Tay chatbot, released on Twitter as an experiment in 2016, lasted just 16 hours before users trained it to spout misogynistic and racist slurs.
In South Korea, the chatbot Luda was manipulated into complying with sexual demands as an obedient "sex slave". Yet some in the Korean online community dismissed this as a "victimless crime".
The reality is that the underlying design logic of these technologies (female voices, deferential responses, playful deflections) creates a permissive environment for gendered violence.
These interactions mirror and reinforce real-world misogyny, teaching users that commanding, demeaning and objectifying "her" is acceptable. When abuse is routine in digital spaces, we must take seriously the possibility that it carries over into offline behaviour.
Ignoring gender bias concerns
Regulation is struggling to keep pace with how fast this problem is growing. Gender-based discrimination is rarely classified as high risk and is often treated as something that can be fixed through design.
The European Union's AI Act requires risk assessments for high-risk applications and bans systems deemed an "unacceptable risk", but most AI assistants will not be classified as "high risk".
Gender stereotyping, or the normalization of verbal abuse and harassment, does not currently qualify as prohibited AI under the act. Only extreme cases, such as voice assistants shown to manipulate behaviour and encourage unsafe conduct, would fall within its scope and be banned.
In Canada, gender-based impact assessments are required for government systems, but the requirement does not extend to the private sector.
These are notable steps forward. But they are limited, and they remain exceptions rather than the rule.
Most jurisdictions have no rules addressing gender stereotyping in AI design or its effects. Where regulations do exist, they emphasize transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias.
In Australia, the government has signalled it will rely on existing frameworks rather than developing AI-specific regulation.
This regulatory gap matters because AI is not static. Every sexist command and every instance of abuse feeds back into systems that shape future outputs. Without intervention, we risk permanently encoding human misogyny into the digital infrastructure of everyday life.
Not all assistant technologies, including those presented as female, are harmful. They can empower, educate and advance women's rights. In Kenya, for example, sexual and reproductive health chatbots have improved young people's access to information compared with more traditional approaches.
The real challenge is striking a balance: encouraging innovation while setting boundaries that ensure standards are met, rights are respected and designers are held accountable when things go wrong.
A systemic problem
The core issue goes beyond Siri or Alexa; it is systemic.
Women make up only 22% of AI professionals globally, so their absence from the design table means technology is built from narrow perspectives.
Meanwhile, a 2015 survey of more than 200 senior women in Silicon Valley found that 65% had experienced unwanted sexual advances from a superior. The culture that shapes AI is profoundly unequal.
Optimistic narratives about "fixing bias" through better design or ethical guidelines ring hollow without enforcement; voluntary codes cannot dislodge entrenched norms.
Laws must classify gender-related harm as high risk, mandate gender impact assessments and require companies to demonstrate that they have taken steps to mitigate such harm, with penalties when they fail.
Regulation alone is not enough. Education, especially within the tech sector, is essential to understanding the effects of gendered defaults in voice assistants. These tools are the product of human choices, and those choices perpetuate a world in which women, real or virtual, are positioned as compliant, subservient or silent.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

Ramona Vijeyarasa, Professor, Faculty of Law, University of Technology Sydney
Dr. Ramona Vijeyarasa is a leading Australian expert on how legal systems address gender-based challenges and is recognized for her pioneering work in measuring and tackling gender inequality in law. She joined the Faculty of Law at the University of Technology Sydney in 2017, where she created the Gender Legislative Index (GLI), a tool that combines human evaluators and machine learning to assess laws worldwide for their gender-responsiveness.