When AI is used to cross the line

Rethinking consent and sexual harassment in the age of artificial intelligence

Conversations about AI at work often focus on efficiency and innovation. Far less attention is given to how these same tools can interact with long-standing social harms, including sexual harassment and abuse. In a world where gender-based violence is already common, the misuse of AI has the potential to amplify harm in ways that organisations are not yet fully prepared for.

This is not about rejecting AI. It is about understanding how its capabilities change risk, and what organisations need to do to respond responsibly.

AI and the changing shape of harm

Sexual harassment did not begin with artificial intelligence. It has always been shaped by power, access, and social norms. What AI changes is not the intent behind harmful behaviour, but the ease with which it can occur.

With very little effort, generative AI tools can create images, videos, and audio that appear realistic. In most cases, these tools are used responsibly. In some cases, they are misused to create sexualised or explicit content using someone’s likeness without their consent. This can happen without direct interaction and sometimes without the person’s knowledge until harm has already occurred.

This shift requires organisations to move away from asking whether AI is “good” or “bad” and towards understanding how its use changes risk.

Rethinking consent in digital and synthetic contexts

Consent is often discussed in physical or interpersonal terms. AI introduces situations where consent can be violated without physical interaction.

When someone’s image, voice, or identity is used to generate sexual content without permission, the harm does not depend on whether the material is technically real. The loss of control, exposure, and potential reputational damage are experienced in very real ways.

Once such content is shared, it can be copied and redistributed repeatedly. Each circulation extends the impact. For the person affected, the boundary violation does not end when the content is created.

Why existing workplace frameworks struggle

Many workplace policies were developed for environments where misconduct was easier to observe and easier to place in time and space. AI-enabled harm does not always fit these assumptions.

When behaviour happens online, anonymously, or outside standard working hours, organisations may hesitate, uncertain whether the behaviour falls within organisational responsibility or how it should be addressed.

The evolving role of platforms

AI platforms occupy a unique position in this landscape. Where a traditional publisher distributes content created by others, some generative AI tools now both create and distribute it. This dual role means they participate directly in both the production and the spread of potential harm.

Analyses of certain platforms have revealed concerning patterns: reviews estimate that on some platforms, between 40 and 65 percent of AI-generated images contain sexualised content. Measured across millions of generated images, this represents production at a scale that would have been impossible before AI.

This changes the accountability equation. Organisations can no longer view platforms simply as neutral tools when the platforms themselves are generating and hosting harmful content.

Trust, evidence, and fair responses

AI-generated content challenges long-standing assumptions about evidence. Images, audio, and video were once treated as reliable indicators of what happened. Synthetic media has complicated that trust.

For organisations, this does not mean dismissing digital material altogether. It means adapting how evidence is evaluated. Responses need to consider context, intent, and impact rather than relying only on whether content can be proven to be artificial or authentic.

A fair response prioritises the experience of the person affected while maintaining due process.

Responsibility does not end with adoption

It is often said that AI tools are neutral and that misuse reflects individual behaviour. While tools do not have intent, design choices matter. Accessibility, speed, and anonymity influence how boundaries are crossed.

Recognising this does not mean rejecting AI. It means acknowledging that organisations play a role in setting expectations, defining acceptable use, and creating guardrails that support responsible behaviour.

What organisations can do now

Preparing for AI-enabled risks does not require abandoning existing frameworks. It requires extending them thoughtfully.

Practical steps include:

  • Updating harassment and acceptable use policies to explicitly include AI-enabled misconduct
  • Incorporating AI literacy into training, including discussion of misuse and impact
  • Treating digital and AI-enabled harm with the same seriousness as offline misconduct
  • Ensuring HR, legal, and IT teams are equipped to respond to emerging scenarios
  • Providing reporting mechanisms that feel credible and safe for employees

At Serein, we work with organisations to strike this balance. By combining AI literacy with workplace frameworks and ethical safeguards, we help teams move forward with clarity and care.

If you would like to explore how your organisation can strengthen its approach to AI and harassment prevention, write to us at hello@serein.in.
