SINGAPORE: In recent weeks, generative artificial intelligence (AI) tools have crossed a troubling line. X’s AI chatbot, Grok, introduced image-editing features that allow users to create sexualised images of real people using simple text prompts. With minimal effort, women found their faces and bodies manipulated, stripped and circulated online without consent.
The backlash was swift. Governments in Southeast Asia – Indonesia, Malaysia and the Philippines among them – blocked access to Grok, while regulators and civil society groups questioned why such features were released in the first place.
The episode has sparked a wider debate – not just about AI misuse, but about whether platforms treat safety as a core responsibility, or as an afterthought addressed only once harm has occurred.
WHEN DESIGN CHOICES ENABLE HARM
The Grok scandal should not surprise us. Features like “Spicy Mode” and permissive image-editing tools were not accidents – someone proposed them, and someone else approved them. Too often, “innovation” is used as a shield for negligence, as if harm is an unfortunate byproduct of progress rather than a predictable outcome of design decisions.
Experts have noted that Grok is more lax than most mainstream AI platforms. Other companies have shown that guardrails are technically possible, even if they involve trade-offs. This underscores an uncomfortable truth: AI safety is largely a design choice. When safeguards are absent, it reflects priorities that warrant scrutiny.
Predictably, the harm does not fall evenly. While anyone can be targeted, non-consensual sexualised deepfakes overwhelmingly affect women. A 2023 study found that 99 per cent of deepfake pornography depicts women. This is misogyny at scale, amplified by tools that reward speed and virality over restraint and responsibility.
WHY “JUST ART” IS A DANGEROUS LIE
Some have tried to dismiss the harm of sexualised AI images as “fake”, “fictional” or “just art”. But when AI tools can be used to sexualise women and children without their consent, the line is not just crossed – it is erased. To frame regulation as an attack on free speech, as Elon Musk has done, is to wilfully misunderstand what is at stake.
Research shows that the non-consensual sharing of sexualised images is associated with psychological harm comparable to sexual violence. At SG Her Empowerment (SHE), data from our SHECARES support centre echoes this.
Survivors describe profound feelings of violation, shame, and disgust – emotions no one should have to justify. Our bodies are our own. Seeing them humiliated in a public digital space creates trauma that lingers long after content is removed.
My chairperson was the subject of sexualised deepfakes, and I witnessed the fear and anxiety she carried. When I received her message saying deepfakes had been made in her likeness, my reaction was disbelief, followed by dread. We work in this space. We know the reporting processes, the laws and the gaps. And still, the thought of it happening to one of us made my stomach drop.
Survivor perspectives are critical because survivors understand how systems fail in practice, not just in theory. They see the loopholes that get exploited, the tactics abusers use, and the long tail of harm that persists even after content is taken down.
DIGNITY MUST BE UNIVERSAL
X responded to criticism by geoblocking the harmful features in jurisdictions where they are illegal. This may improve compliance on paper, but it exposes a deeper problem. When safety standards differ across borders, platforms can toggle protections on and off depending on legal risk. But how can human dignity be jurisdictional?
Poorly calibrated bans risk pushing abuse underground, into unregulated spaces beyond enforcement – which is why knee-jerk calls to shut platforms down may feel decisive but rarely deliver lasting safety.
Singapore’s approach to online safety has been measured. The recently passed Online Safety (Relief and Accountability) Act (OSRA) prioritises speed of takedown, expands avenues for redress, and places clearer obligations on moderators and platforms.
The Criminal Law (Miscellaneous Amendments) Bill 2025 goes further by explicitly recognising AI-facilitated sexual abuse, including synthetic and manipulated content. Together, they send a clear signal: Harm does not stop being harm simply because it is generated by an algorithm.
But laws alone are not enough.
When a sexualised deepfake is created, responsibility does not rest solely with an anonymous user. It extends across the entire chain – to those who design the tools, deploy them without safeguards, and profit from engagement while externalising harm. If fixes could be implemented quickly, what prevented them from being in place before the damage was done?
A FIGHT AGAINST CALLOUSNESS
If we allow this moment to pass without drawing firm lines, the next phase is easy to imagine.
Deepfake creation will soon be as effortless as taking a photo on a smartphone. Harassment will become automated, scalable and harder to trace. What is shocking today will become routine tomorrow – dismissed as “just part of being online”.
Younger generations will grow up in a digital environment where violation is a risk to be managed, not prevented.
This is not a fight against technology. It is a fight against callousness – against the normalisation of harm in the name of progress.
The goal is simple but urgent. We must agree on a shared standard of conduct – one where dignity is non-negotiable, safety is not optional, and innovation does not come at the cost of dehumanising others. Governments, civil society, parents and young people all have a role to play in shaping this shared standard, while tech companies must have clear and enforceable responsibilities to uphold it.
If we fail to come to a consensus, the consequences are clear. Harm will continue to be normalised, responsibility diluted, and dignity treated as negotiable. No amount of clever engineering will save us from the damage we choose to ignore.
How Kay Lii is Chief Executive Officer of SG Her Empowerment (SHE).