SINGAPORE: Terrorism presents a persistent danger to Singapore. The latest Singapore Terrorism Threat Assessment Report, released on Tuesday (Jul 29), attributed this threat to a “volatile global landscape” fuelled by global developments such as the Gaza conflict and a growing range of extremist ideologies, some of which have already manifested in Singapore.
The continued risk should not come as a surprise, nor should the growing influence of technology and digital platforms in enabling terror threats. But what stood out in this year’s report was the emergence of innovations in artificial intelligence as a factor in the growing complexity of the local and global terrorist threat.
Not that this was surprising either, given the rapid development and adoption of AI. But we are only beginning to see the impact of AI on everything from jobs and education to the way we form human connections – and its emerging role in terrorism is concerning.
The Internal Security Department (ISD) said that new technologies like AI and 3D printing have not so far been found to feature in any terror attack plot in Singapore, but they have played a part in the self-radicalisation of local youths.
A 17-year-old Islamic State supporter detained in September 2024, for example, had used an AI chatbot to generate a bai’ah, or pledge of allegiance, to ISIS, as well as a declaration of armed jihad against non-Muslims intended to inspire other Muslims in Singapore to engage in armed violence.
In another case, a 17-year-old supporter of far-right extremist ideologies detained in March this year had searched for instructions on an AI chatbot on how to produce ammunition, and considered 3D-printing his own firearms as part of his plans to conduct attacks locally.
ISD said that, given the growing sophistication of and ease of access to these and other emerging technologies, future terror activities and plots can be expected to feature them.
EVOLVING TACTICS AND TECHNIQUES
There are a number of ways that new and evolving technology like AI can be harnessed by terrorist organisations to advance their aims.
First, in accelerating radicalisation and recruitment. AI has been used to generate disinformation at a scale and speed far beyond what could be achieved by manual means alone.
Many extremists misappropriate, misinterpret or manipulate religious prophecies or social narratives to suit their ideologies and win more supporters to their cause. In that vein, AI would be able to make such efforts, which bear significant similarities to disinformation operations, more effective and efficient.
AI-powered chatbots tap into our human need for companionship, but have also been shown to be easily manipulated. Isolated individuals may find seemingly empathetic connections and be nudged toward extremist ideology.
Second, AI could be deployed by terror organisations in ever more innovative ways to create chaos and confusion.
A recent investigation by the BBC unearthed evidence that a Russian-linked disinformation campaign had used AI to clone the identities of British public sector workers. For example, an emergency medical adviser in the UK had his voice faked in a video campaign spreading fear ahead of Poland’s presidential election earlier this year.
In the recent Israel-Iran conflict, AI-driven disinformation, such as fake videos and images showing the purported prowess and successes of both sides, was shared widely online with the aim of bolstering support for each side’s campaign.
Third, AI could be used by radicalised individuals to generate and suggest plans and strategies for terror attacks. This would be especially appealing to younger, more vulnerable individuals who have not been exposed to formal training or had direct contact with terror organisations.
Combined with existing trends such as the use of AI to produce deepfake videos, and the other tactics detailed in the report, these developments create a complex environment that can only increase the chances of terrorism threats spreading across the world.
WHAT CAN BE DONE
While the report makes clear that there is currently no specific intelligence of an imminent terrorist attack against Singapore, it is important not to take such relative peace for granted.
All eight Singaporeans dealt with under the country’s Internal Security Act since last July were self-radicalised by extremist materials they found online. ISD said that online platforms have shortened the time between individuals’ first encounter with terrorist and radical elements and their subscription to violent extremist ideologies.
Statistics show that the time taken for self-radicalisation in cases detected in Singapore has essentially halved, from 24 months before 2015 to an average of 12 months between 2021 and 2025. Some cases took mere weeks.
In particular, the vulnerability of young people to radicalisation remains a pressing concern.
Much of the effort ISD prescribes to counter terrorism and violent extremism focuses on enhancing counter-terrorism capabilities, beefing up border security, and working with social and education agencies to address the threat early.
However, given the potential for emerging technologies such as AI to enable and enhance terrorist operations, equal emphasis should be placed on identifying such threats as early as possible.
This will require close collaboration between the government, security agencies and technology counterparts in the private sector, while also ensuring that the public is educated and kept up to date on such threats as soon as they are detected, whether at home or abroad.
Nicholas Fang is managing director of Black Dot Research, a market research consultancy which operates a leading independent fact-checking platform. A former Nominated Member of Parliament, he is also director for security and global affairs at the Singapore Institute of International Affairs.