X and TikTok issued letters of caution by IMDA for serious weaknesses in detection, removal of harmful content

SINGAPORE: The Infocomm Media Development Authority (IMDA) has issued letters of caution to X and TikTok and placed both platforms under enhanced supervision, following “serious weaknesses” in their measures to detect and remove harmful online content.

IMDA’s Online Safety Assessment Report 2025, released on Tuesday (Mar 31), found that X failed to effectively tackle child sexual exploitation and abuse material (CSEM), while TikTok showed serious gaps in detecting and removing terrorism-related content.

According to IMDA, a letter of caution reprimands regulated entities on serious weaknesses discovered by the regulator and requires them to take steps to address these weaknesses.

Under the enhanced supervision measures, the two platforms must provide regular progress updates to IMDA in implementing rectification measures, until the regulator is satisfied that the issues are adequately resolved.

The measures include enhancing their detection systems with the use of AI, and taking into account specific code-words and tactics that bad actors have used to target Singapore users.

The two platforms have also agreed to train human reviewers to better identify CSEM and terrorism content related to Singapore, and improve information sharing with the tech community and regulators, said IMDA.

They must also submit supporting data and information to IMDA by Jun 30 to demonstrate the effectiveness of their rectification measures.

"Should X or TikTok fail to satisfy IMDA that they have improved the effectiveness of their measures to address the specific types of CSEM and terrorism content that IMDA has detected, IMDA will not hesitate to explore further options, including potential regulatory action under the Broadcasting Act," said the authority.

Under Singapore’s Broadcasting Act, those who fail to comply may face a fine of up to S$1 million, with further fines for continuing offences.

Both platforms have accepted IMDA’s findings and have committed to rectifying the issues, said IMDA.

SHARP RISE IN HARMFUL CONTENT

IMDA found that X failed to curb a sharp rise in CSEM cases involving Singapore users. The number of such cases more than doubled from 33 in 2024 to 73 in 2025, despite the regulator informing the platform of the issue in 2024 and sharing detection indicators with them.

Cases of content sharing or linking to CSEM – 68 per cent of the 2025 cases – involved coordinated networks of accounts using terms commonly associated with CSEM, posting links that directed Singapore users to external sites hosting the material.

The remaining 32 per cent of 2025 cases were self-generated CSEM. These involved accounts sharing posts with self-generated explicit sexual imagery from users in Singapore who were purportedly under 18.

These users' profiles included commonly known number patterns indicating their date of birth or age.

The report also found that a majority of these 73 cases remained on the platform for an extended period, ranging from nine to 31 weeks on average.

"X therefore had ample time to proactively detect and remove these cases," said IMDA in its report.

IMDA pointed out that all 73 cases violated X’s own policies against CSEM; however, X removed them only after they were flagged by the regulator.

Last year, IMDA detected terrorism content shared by Singapore-based accounts for the first time.

There were 17 such cases of terrorism content shared by Singapore-based accounts on TikTok that were not proactively detected and removed in 2025.

Some videos used edited footage or audio linked to known terrorist organisations that were blended with benign content. In some cases, the terrorism-related audio was also concealed under the platform's "original sound" label, which adds it to TikTok's database for others to use in their posts as well.

According to the report, there were instances where TikTok initially assessed the content as non-violating even after user reports. The content was removed only after IMDA intervened.

This demonstrates the weakness of TikTok's user reporting system, said the authority.

IMDA stressed that CSEM and terrorism content are “very egregious harms” that must be proactively detected and swiftly removed before users encounter them, as required under Singapore’s Code of Practice for Online Safety for social media services.

In response to IMDA’s findings, TikTok said it does not allow violent and hateful organisations or individuals on its platform.

It added that in the fourth quarter of 2025, its proactive systems successfully removed over 99 per cent of content violating its policy before it was reported, with over 93 per cent of content being removed within 24 hours.

“As violent extremist methodologies and evasion techniques continue to evolve, so do we. We continually evaluate and strengthen our policies and systems, and are committed to partnering with IMDA under the enhanced supervision scheme,” said TikTok in IMDA's report.

X said it maintains a zero tolerance policy towards child sexual exploitation material, adding that it is deeply committed to child safety.

"We have made meaningful progress in restricting minors’ access to adult content on X, and warmly welcome IMDA’s recognition of this," said the platform in IMDA's report.

"We take our responsibility to young users very seriously and continue to strengthen safeguards in this area."

CONCERNS ABOUT CHILD SAFETY MEASURES

The latest report builds upon the baseline safety measures established in last year's inaugural report, and highlights the areas of weakness the designated social media services - Facebook, HardwareZone, Instagram, TikTok, X and YouTube - need to address, as well as improvements they have made over the past year.

While the platforms have made some improvements since 2024, the latest report raised concerns about child safety measures, noting that children could still access age-inappropriate content on several platforms.

Facebook and YouTube were found to have instances where children’s accounts could view content involving nudity or sexual themes, while HardwareZone’s age restrictions were easily bypassed before improvements were introduced at the end of January.

Although Instagram and TikTok reported more comprehensive child safety features, IMDA said the effectiveness of such measures varied greatly across platforms.

“Given the rapidly evolving online safety risk landscape, especially for children, designated social media services must continue to prioritise enhancing the comprehensiveness and effectiveness of their measures to minimise children’s exposure to harmful and age-inappropriate content,” IMDA said in its report.

More broadly, IMDA found that sexual content, violent content and cyberbullying content were the top three types of harmful content removed by the social media platforms proactively and/or as a result of user reports.

Despite these issues, IMDA said there had been progress since the first report, particularly in faster response times to user reports, which averaged between two and five days in 2025. In 2024, the average time to action was between three and 10 days.

SCORING FOR ONLINE SAFETY

The report aims to help users, including parents, make informed decisions for themselves and their children about the risks of, and safety measures available on, various social media sites.

The six social media services were evaluated on how thoroughly and effectively their measures met the Code of Practice for Online Safety.

While none of the six designated social media services achieved an overall rating of five out of five, Facebook, Instagram and YouTube maintained relatively stable overall ratings, with improvements in how quickly they respond to user reports.

HardwareZone also saw slight gains, being the only platform to score full marks in user reporting and resolution.

However, TikTok’s overall rating dipped from 2024, dragged down by the effectiveness of its reporting and resolution mechanisms, which declined significantly in 2025, according to the report.

Instagram and TikTok scored full marks in user safety measures for children, while HardwareZone scored the lowest with 2.5 out of five points.

X, while improving in some areas such as restricting children’s access to adult sexual content, continued to show weak performance in proactively detecting CSEM in 2025.

Transparency remained another weak point. Facebook, Instagram and YouTube were unable to provide detailed Singapore-specific data on how quickly they respond to user reports, limiting users’ ability to assess platform safety.

IMDA said it will continue to monitor platforms closely and review the relevance of its regulations as online threats evolve, adding that it is also studying how to strengthen online safety requirements for children.
