SINGAPORE: During Indonesia's 2024 presidential election, AI reanimated former President Suharto, who died in 2008, to endorse the Golkar party. When India held national elections last year, manipulated videos were used to target politicians during campaigning. In the United States, some voters received a robocall that sounded eerily like then President Joe Biden urging them not to vote in the primary, later confirmed to be a deepfake.
These incidents demonstrate how AI-generated media can weaponise memory, identity and emotion in politics. But they also highlight a deeper democratic dilemma: If citizens can no longer trust what they see or hear, the very foundation of collective decision-making begins to erode.
Scholars and researchers warn that deepfakes could supercharge polarisation, push voters into echo chambers, and even delegitimise electoral outcomes. Some policymakers have gone so far as to call synthetic media an existential threat to democracy.
But do deepfakes inflame polarisation as much as we think? A newly published study by my team, based on India’s digitally engaged electorate, suggests a more complicated story.
ERODING OUR SHARED UNDERSTANDING OF TRUTH
Deepfakes are unique among digital tools because they don’t just distort facts. They manipulate identity itself.
Scholars like Danielle Citron and Bobby Chesney have warned that public trust could be corroded when it becomes impossible to distinguish the authentic from the artificial, allowing bad actors to dismiss even genuine evidence as fake – a dynamic they coined the “liar’s dividend”.
Democracies depend on a shared understanding of truth so that citizens can participate effectively in civic life and hold their leaders accountable. When that understanding erodes, political actors can dismiss inconvenient facts as fabrications, or worse, use synthetic content to attack rivals.
With online platforms wired to reward outrage and emotional engagement, deepfakes risk accelerating political divisions.
But the effects of deepfakes are not uniform. Like all political communication, they filter through voters’ identities, beliefs and expectations. To understand how these dynamics play out in practice, we turned to India.
CHALLENGING ASSUMPTIONS
India is a critical test case: It is the world’s largest democracy, with 969 million eligible voters. Its politics are multiparty and coalition-driven, making questions of polarisation particularly complex. And it is also a country where digital campaigning has exploded, with parties embracing WhatsApp networks, AI tools and, increasingly, synthetic media.
In our experimental study, more than 1,600 respondents were exposed to political news stories. Some read about an opposing party’s campaigns using deepfakes, others about their own party’s campaigns doing so, while a control group read unrelated political news.
What we measured was affective polarisation – how warmly or coldly voters felt toward coalitions, parties and leaders after being exposed to one of these experimental conditions.
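To make the measure concrete, here is a minimal sketch of how affective polarisation is commonly computed from feeling-thermometer ratings and compared across experimental conditions. The column names and data below are hypothetical illustrations of this standard approach, not the study's actual instrument or code.

```python
# A minimal sketch of affective polarisation: the gap between warmth toward
# one's own party and warmth toward rivals on a 0-100 feeling thermometer.
# Column names and data are hypothetical, for illustration only.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "condition": ["control", "control", "outparty_deepfake", "outparty_deepfake"],
    "inparty_warmth": [78, 65, 80, 70],    # warmth toward respondent's own party
    "outparty_warmth": [40, 55, 35, 50],   # warmth toward the rival party
})

# A larger gap means warmer feelings toward the in-group, colder toward the out-group
df["affective_polarisation"] = df["inparty_warmth"] - df["outparty_warmth"]

# Compare respondents who read about the rival party's deepfakes with controls
control = df.loc[df["condition"] == "control", "affective_polarisation"]
treated = df.loc[df["condition"] == "outparty_deepfake", "affective_polarisation"]
t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
print(f"treated mean: {treated.mean():.1f}, control mean: {control.mean():.1f}, p = {p:.3f}")
```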
The results were striking. First, exposure to deepfakes alone did not increase polarisation. Respondents who read about the opposing party’s use of deepfakes did not become significantly more hostile toward rivals than those in the control group. This finding challenges the common assumption that deepfakes automatically inflame partisan divides.
Second, voters’ approval of AI use mattered. Those who expressed higher approval of candidates using AI in campaigning showed lower levels of polarisation across coalitions, parties and leaders. In other words, far from intensifying divides, being more receptive to AI use correlated with a depolarising effect.
Third, demographic differences surfaced. Women consistently reported lower levels of polarisation than men, while older voters showed slightly reduced hostility toward rivals compared to younger ones. These results suggest that gender and age are key, and currently overlooked, variables in understanding how AI reshapes political attitudes.
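For readers who want to see how all three findings could be estimated in a single model, the sketch below regresses affective polarisation on the experimental condition, AI approval and demographics using simulated data. The variable names, specification and data are my assumptions for illustration, not the paper's actual analysis.

```python
# Sketch: one regression covering all three findings, run on simulated data.
# Variable names and model specification are hypothetical illustrations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1600  # roughly the study's sample size
df = pd.DataFrame({
    "polarisation": rng.normal(30, 15, n),   # in-party minus out-party warmth
    "condition": rng.choice(["control", "outparty_deepfake", "inparty_deepfake"], n),
    "ai_approval": rng.integers(1, 6, n),    # 1-5 approval of AI in campaigning
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 75, n),
})

# Treatment('control') makes the control group the baseline, so the condition
# coefficients test whether reading about deepfake use raises polarisation;
# ai_approval, female and age capture the attitudinal and demographic effects.
model = smf.ols(
    "polarisation ~ C(condition, Treatment('control')) + ai_approval + female + age",
    data=df,
).fit()
print(model.summary().tables[1])
```

Under the pattern of results described above, the condition coefficients would be statistically indistinguishable from zero, while ai_approval and female would carry negative signs.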
(Photo: People stand in a queue to cast their votes for the capital’s state legislature election at a polling booth in New Delhi, India, on Feb 5, 2025. AP)
THE BAR FOR ACCOUNTABILITY FALLS
What, then, should we make of deepfakes in politics? The danger may lie less in any immediate spike in polarisation than in the gradual erosion of institutional trust.
If deepfakes become normalised, citizens may come to expect manipulation as standard practice. This could lower the threshold for deception in politics, even if voters do not immediately become polarised in response to every fake video.
Moreover, approval of AI use raises a paradox. On one hand, it seems to blunt affective polarisation, suggesting that voters who accept technological innovation are less easily outraged. On the other hand, normalisation risks numbing citizens to ethical concerns.
If “everyone does it”, the bar for democratic accountability falls.
WHERE DO WE GO FROM HERE?
First, regulation is important in creating safeguards against the misuse of new technologies.
Singapore has already taken important steps through its Protection from Online Falsehoods and Manipulation Act (POFMA) and the more recent Elections (Integrity of Online Advertising) (Amendment) Bill 2024, which bans the creation or dissemination of deepfakes and other manipulated content that falsely represents candidates, parties or their associations, or that may incite hostility among groups.
Yet as synthetic media become more accessible, even tightly regulated democracies will need clearer and enforceable norms on their use in campaigns.
The challenge will not only be to prevent harm, but to build trust and develop systems that help citizens know what is real and who is accountable. It will be to find a way for digital innovation to coexist with democratic integrity in an age where seeing is no longer believing.
Second, promoting media literacy can benefit both citizens and societies. If approval of AI shapes responses, then equipping voters with the tools to critically evaluate synthetic media is crucial. Schools, civil society and platforms themselves all have a role.
Finally, more research is needed to unpack the evolving social dimensions of AI. The gender and age differences uncovered in our study point to deeper questions about how trust, technology and partisanship intersect in digital societies.
Comparative research across Southeast Asia can shed light on how cultural and institutional factors shape public responses to AI-driven media.
HOW SOCIETIES RESPOND
Deepfakes are not destiny. Their impact depends on political culture, voter attitudes and institutional safeguards.
Our study in India shows that while deepfakes can be unsettling, their effects on polarisation are neither automatic nor uniform. The real threat lies in how societies respond – whether we allow synthetic media to corrode trust, or whether we strengthen democratic resilience against it.
In the age of AI, the challenge is not only to distinguish truth from fabrication but to ensure that democracy itself remains believable.
Taberez Ahmed Neyazi is Associate Professor and Principal Investigator at the Centre for Trusted Internet & Community, and Director of Digital Campaigns and Propaganda Lab, National University of Singapore