SINGAPORE: More than four in five Singapore residents reported encountering harmful online content in the past year, according to findings from two surveys conducted by the Ministry of Digital Development and Information (MDDI).
Results from the Perceptions of Digitalisation Survey and Smart Nation Policy Perception Survey, released on Friday (Oct 10), revealed that content supporting illegal activity, such as scams, was the most frequently encountered harmful content.
This was followed by sexual content, violent content, cyberbullying and content causing racial or religious tension.
Frequently encountered harmful online content was defined as content that participants came across at least a few times a month.

Fifty-seven per cent of respondents reported encountering harmful content on Facebook, while 46 per cent said they came across such content on YouTube.
Forty-one per cent of participants also indicated that they encountered harmful content on Instagram.
About 36 per cent came across such content on TikTok, 15 per cent on X and 13 per cent on Reddit.
For messaging apps, 38 per cent said they encountered harmful content on WhatsApp, while 22 per cent indicated encountering it on Telegram.
“While this may be a result of the relative reach of each platform, the prevalence across all platforms is cause for concern,” said MDDI in a press release on Friday.
The Perceptions of Digitalisation Survey was conducted from November 2024 to February 2025, while the Smart Nation Policy Perception Survey was carried out from March to May this year.
Each survey involved 2,008 Singaporeans and permanent residents aged 15 and above.
The samples for both surveys were representative of the resident population by gender, age and race, said MDDI.
CATFISHING MOST COMMON HARMFUL ONLINE BEHAVIOUR
According to the findings, about one in three respondents experienced harmful online behaviour.
Catfishing was the most common type of harmful online behaviour that respondents encountered, with most coming across it on WhatsApp and Facebook.
Catfishing refers to the practice of creating a fake online identity to deceive others into a romantic relationship.
Among those who encountered harmful online behaviour, 71 per cent said they had experienced catfishing, while 27 per cent reported receiving unwanted sexual messages.
Online harassment was the third most common type of harmful online behaviour, reported by 16 per cent of this group.
Other common types of harmful behaviour included identity theft, offensive remarks and unwanted violent messages.
Most respondents who encountered harmful online behaviour blocked the user, and about 46 per cent of the participants reported the content or user to the platform.
However, when it came to harmful online content, participants tended not to report the content or the user to the platform.
More than four in five respondents skipped or closed the content, and 23 per cent of participants took no action.
“Such inaction could be because past experiences with reporting did not lead to timely actions,” said the ministry.
A report by the Infocomm Media Development Authority (IMDA) last year noted that designated social media services should improve the effectiveness and timeliness of their responses to user reports.
According to the report, most of these services took an average of five days or more to act on user reports of harmful content that violated their community guidelines, which was “considerably longer” than what was stated in their annual reports.
CALL FOR STRONGER REGULATION
About two in three respondents to the Smart Nation Policy Perception Survey supported stronger regulation to protect users from online harms, even if it resulted in less freedom for users in the online space, said MDDI.
“This shows that a clear majority accept that some regulatory restraints are necessary to protect Singaporeans from online harms like online scams, inappropriate content and misinformation,” said the ministry.
The government, industry and community have been working together to build a safer online environment, said MDDI.
“We will also work with platforms to enhance their safety measures to provide a safe experience for all users,” it added.
The ministry noted that the government has taken several legislative steps to protect Singaporeans from online harms.
These include the introduction of a code of practice in July 2023, which requires designated social media services to set up systems and processes to prevent Singapore users, particularly children, from accessing harmful content.
IMDA’s Code of Practice for Online Safety – App Distribution Services also came into effect in March 2025, said MDDI.
It requires designated app stores to minimise the risk of users, especially children, being exposed to harmful content.
These app stores must also put in place age assurance measures to prevent users under the age of 18 from accessing age-inappropriate apps. The measures will be implemented by Mar 31 next year.
There are also plans to establish a new Online Safety Commission by the first half of 2026, which will enable victims of online harms to get timely help and hold perpetrators accountable. Minister for Digital Development and Information Josephine Teo announced earlier this year that the Bill would be tabled within the year.
MDDI added that it would improve public education and outreach to make online safety resources more accessible, practical and action-oriented. This will help people better recognise online risks and cultivate safer digital habits.
“In collaboration with community, industry and corporate partners, more ground-up initiatives such as workshops and webinars will be organised to educate the public on relevant topics,” it said.