Workplaces have been talking about AI in terms of efficiency and automation, but a different conversation is becoming urgent. The recent Grok incident, where the platform generated explicit and sexualised deepfakes of women from ordinary photos, showed how easily someone’s dignity can be violated with a single prompt.
This matters directly to Internal Committees because the nature of sexual harassment is changing faster than workplace frameworks can adapt.
The PoSH Act may not use the word “AI,” but the core principles remain the same. Unwelcome conduct, violation of consent, and harm to dignity do not require physical presence. They do not require conversation.
They do not even require the accused and the complainant to be in the same room. AI has changed the ease, speed and scale of harm, and ICs need to be prepared to respond within the law’s intent, not only its older examples.
AI enabled harm still falls under PoSH
For ICs, the most important shift is recognising that digital violations are not outside the law. When an employee’s image is morphed into explicit content without consent, the act amounts to unwelcome conduct of a sexual nature, regardless of whether the image is artificial.
The Grok incident proved that deepfakes are no longer fringe behaviour. They can be generated casually, shared widely and weaponised quickly.
ICs cannot dismiss such cases as “online issues” or “personal social media problems.” If the impact is experienced by an employee, and it affects their safety, dignity or ability to function at work, it is a workplace concern. The fact that the act happened outside the office, on a personal device or through an external platform does not remove organisational responsibility.
The PoSH Act has always covered behaviour that arises out of or in the course of work, and digital harm easily fits within this scope when colleagues are involved or the impact reaches the workplace.
Consent and evidence require a new lens
AI-enabled harassment challenges how ICs traditionally understand consent and evidence. Consent can now be violated without physical interaction. A person does not have to be present. Their image alone can be misused.
ICs must treat this as a serious boundary breach, not a technical misunderstanding.
Evidence also becomes more complex. Deepfakes blur the line between real and synthetic content. But ICs are not courts. Their responsibility is not to conduct a forensic analysis. It is to evaluate whether the employee experienced unwelcome, harmful conduct and whether the respondent’s behaviour created fear, humiliation or reputational risk.
The authenticity of the image is only one part of the assessment. The intention, the context, the impact and the pattern of behaviour matter equally.
Deepfake cases also present a new form of denial. Respondents may argue that because the image is “not real,” it should not be treated as harassment. ICs must remember that the PoSH law is centred on impact. The harm is real even when the image is artificial.
Why ICs must respond proactively
Internal Committees cannot afford to treat AI cases as exceptions or grey zones. They must build clarity now because the frequency of such incidents is rising. Employees need to know that if AI is misused to harm them, the IC understands the gravity and will act with sensitivity and confidentiality.
ICs should care because the emotional impact of digital violations is often deeper than that of in-person misconduct.
A deepfake does not stay in one place. Once shared, it can continue to circulate without the person’s knowledge. The fear of discovery, the shame and the anxiety follow the employee everywhere, including into the workplace. It affects their ability to participate, collaborate and feel safe.
If internal processes hesitate or appear unsure, employees lose trust not only in the IC but in the organisation’s commitment to safety. Responding firmly also prevents escalation. AI misuse often starts as “experimentation,” “jokes” or “curiosity.” Clear boundaries stop these behaviours from becoming patterns.
The PoSH Act is grounded in dignity, equality and safety, not in outdated definitions of what harassment should look like. AI is new, but harm is not. The responsibility of the IC is to interpret the law in the spirit of protection, not within the limits of old examples.
Finally, ICs should care because the workplace is changing. Employees live hybrid lives where personal digital spaces are connected to professional reputations. Misconduct in one space can easily impact the other. Ignoring this overlap leaves employees unprotected.
AI will continue evolving, and deepfake capabilities will become more accessible. But the fundamental responsibility of the IC remains the same: to protect dignity and respond to unwelcome behaviour with clarity and fairness.
Recognising AI-enabled violations as legitimate PoSH cases is not an expansion of the law. It is an alignment with the reality of how harm now happens.
At Serein, we support organisations and ICs in navigating these emerging forms of misconduct with structured processes, legal clarity and empathy. If your IC would like guidance on handling AI-related cases or updating internal frameworks, write to us at hello@serein.in