Commentary: The AI-fuelled child exploitation crisis is global – so must be our response

SINGAPORE: Child sexual exploitation is one of the most universally reviled crimes in society.

While governments are implementing regulations to stem this abuse, the problem is a transnational one. A recent regional cross-border operation which led to over 400 arrests – including 21 men in Singapore – laid bare this disturbing reality.

This is not only a law enforcement issue; it is a collective challenge to update our moral, legal, and technological defences against a crime that is growing faster than most systems can keep up.

GLOBAL DEMAND AND LOCAL HARMS

We know the problem is exacerbated by technologies like artificial intelligence. But it's hard to grasp just how shocking the scale of this ecosystem is.


Earlier in April, an international operation took down the dark web platform Kidflix, which hosted more than 91,000 videos of child abuse and had 1.8 million users worldwide. Europol said 79 people had been arrested and about 1,400 suspects identified.

The sheer size of the network shows how deeply rooted the problem has become. It underlines the enormity of the task to detect and expunge child sexual abuse material (CSAM), disrupt these distribution networks and prosecute the perpetrators.

Even if the material is produced elsewhere, its reach extends to those at home.

In Singapore, the 21 men who were arrested are being investigated for suspected involvement in producing, possessing and distributing child sexual abuse materials, sexual assault, sexual communication with a minor, and possessing obscene films.

Technology has transformed how abuse is produced and shared. An Interpol study conducted during the pandemic found an increase in the distribution of such content on peer-to-peer networks, social media and darknet forums.

However, the drivers behind child sexual abuse remain stubbornly familiar. In many parts of Southeast Asia, the lure of “easy money” and criminal syndicates continue to fuel the production of CSAM. The reach of digital platforms simply amplifies that harm.

Egregiously, perpetrators tend to be close relatives or known acquaintances who exploit victims for financial gain. This makes digital literacy and public education all the more necessary to sensitise children and their families to the risks of online sexual exploitation and its long-term adverse effects.

VICTIMS NEED FAST TAKEDOWNS AND HUMAN SUPPORT

Just as urgent is the task of supporting and protecting victims.

For survivors, including those whose likenesses are used to create non-consensual, sexualised content, the harm is not abstract. It is deeply personal, long-lasting, and often isolating.

Singapore’s new Online Safety Commission, which will be established in 2026, marks an important step forward. It will give victims a clearer path to report abuse and seek redress.

Equally vital is the work of SHECARES@SCWO, Singapore’s first support centre for victims of online harms, which provides emotional support, legal advice and assistance with content removal.

Still, more must be done. Survivors of image-based sexual abuse (IBSA) often live in fear that their manipulated image will resurface without warning. As long as such content circulates, the risk to children endures, and survivors remain vulnerable to re-traumatisation.


A NEW FORM OF CHILD ABUSE, POWERED BY AI

AI is also enabling perpetrators to create synthetic CSAM that appears lifelike despite being entirely fabricated, and that is disturbingly easy to access.

As a society, we must not treat synthetic abuse as somehow less serious.

In 2023, a man in South Korea was convicted and jailed for producing and distributing AI-generated sexual images of minors. It was a landmark case that affirmed what many instinctively understand: The harm lies not just in the image, but in the sexualisation of children and the erasure of their dignity.

These materials may be machine-generated, but they are circulated and consumed with human intent. Experts warn that such content may reinforce deviant interests, lower psychological inhibitions and trigger real-world abuse. The belief that synthetic CSAM can serve as a safe outlet is not only unsubstantiated, but also dangerous.

As AI tools become easier to access and their use harder to detect, the line between real and synthetic abuse becomes increasingly blurred. This exacerbates the impact on children and survivors.

Algorithms on mainstream porn platforms may escalate exposure to increasingly violent or extreme content. Algorithmic drift can subtly nudge users towards harmful material, including CSAM, especially in the absence of robust safeguards. In such cases, AI does not merely reflect demand – it risks shaping it.


HARNESSING AI FOR GOOD

AI can also play a vital role in fighting back. Emerging tools can detect known and novel forms of child sexual abuse material, flag synthetic content that mimics real individuals, and support platforms in removing harmful content more efficiently.

Advanced image detection algorithms, when trained responsibly, can identify patterns across massive volumes of content that human moderators would overlook. Microsoft, for example, makes its PhotoDNA tool freely available to technology companies and law enforcement agencies, which have used it to detect and remove millions of child exploitation images.
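
To illustrate the general idea behind such matching, here is a minimal sketch of perceptual hashing in Python. This is not PhotoDNA's actual algorithm, which is proprietary; the 64-bit average hash, the known_hashes database and the distance threshold below are illustrative assumptions only.

```python
# A minimal sketch of perceptual hash matching, the general family of
# techniques that tools like PhotoDNA belong to. NOT PhotoDNA itself:
# this uses a simple 64-bit "average hash" for illustration, and
# known_hashes is a hypothetical stand-in for a database of hashes of
# previously identified material.
from PIL import Image

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint that survives resizing
    and re-encoding, unlike an exact cryptographic checksum."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: is it brighter than the image's mean?
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_known(path: str, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an image whose hash is within `threshold` bits of any known
    hash; a small tolerance catches crops, compression and minor edits."""
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

The key design point is that near-duplicate matching, not exact matching, is what lets such systems catch re-uploads that have been slightly altered to evade detection.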

Some systems now go beyond matching known CSAM to spotting previously unseen or synthetically generated images, making them especially important in cases involving deepfakes or manipulated likenesses. Encouragingly, industry alliances such as the Tech Coalition have brought technology companies together to deploy existing tools, develop new ones and accelerate knowledge sharing to stamp out CSAM more effectively.

Financial institutions are also leveraging AI to disrupt the financial networks that facilitate these crimes. Banks, fintech companies and payment processors track atypical transaction patterns, such as small but frequent international payments or transactions linked to high-risk regions, and surface them for further investigation.
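
As a hypothetical illustration of that kind of screening, the sketch below flags accounts making many small cross-border payments within a short window. Real systems combine far richer signals with machine-learned models; the Txn fields and the thresholds here are assumptions for illustration only.

```python
# Hypothetical rule-based screen for the pattern described above:
# many small international payments from one account in a short window.
# Thresholds are illustrative, not real compliance parameters.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Txn:
    account: str
    amount: float          # in local currency
    cross_border: bool
    timestamp: datetime

def flag_accounts(txns: list[Txn],
                  max_amount: float = 50.0,
                  min_count: int = 10,
                  window: timedelta = timedelta(days=7)) -> set[str]:
    """Return accounts with at least `min_count` small cross-border
    payments inside any `window`-long period, for human review."""
    by_account: dict[str, list[datetime]] = defaultdict(list)
    for t in txns:
        if t.cross_border and t.amount <= max_amount:
            by_account[t.account].append(t.timestamp)

    flagged = set()
    for acct, times in by_account.items():
        times.sort()
        for i in range(len(times)):
            # Count payments falling within the window starting at times[i].
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= min_count:
                flagged.add(acct)
                break
    return flagged
```

In practice such rules only surface candidates; flagged accounts go to human investigators rather than triggering automatic action.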



WHAT MUST HAPPEN NEXT, URGENTLY AND TOGETHER

Clearly, no country can face this crisis in isolation. The transnational nature of this abuse can only be addressed by a coordinated global response: updated legal frameworks, shared enforcement strategies and stronger accountability for platforms that host and spread harmful content.

Technology companies must improve content moderation and detection systems, including for Southeast Asian languages and cultures.

Victims deserve faster, more transparent removal processes, particularly for material involving children or manipulated likenesses. NGOs supporting victims on the ground are also stretched. More resources must be marshalled to empower them to continue this essential work more effectively.

We owe it to the children whose innocence is being stolen – image by image, click by click – to respond with clarity, conviction and collective resolve.

Dr Chew Han Ei is Senior Research Fellow at the Institute of Policy Studies, National University of Singapore, and a Board Member of SG Her Empowerment.

Prof Lim Sun Sun is Vice President, Partnerships and Engagement at the Singapore Management University and Lee Kong Chian Professor of Communication and Technology at its College of Integrative Studies.

A/P Carol Soon is Associate Professor at the Department of Communications and New Media (CNM) in the National University of Singapore and Vice Chair of the Media Literacy Council.

