Is Netanyahu dead? Figuring out misinformation and how to counter the damage

SINGAPORE: After Israeli Prime Minister Benjamin Netanyahu filmed an address to the Israeli people and posted it online on Mar 13, many netizens claimed they spotted a sixth finger on one hand.

The allegations that the video was generated by artificial intelligence soon escalated to rumours that Netanyahu had died.

Two days later, the Israeli prime minister posted another video of himself visiting a cafe to dispel online rumours of his death. Despite his efforts, the gossip only grew.

Misinformation often spreads faster than the truth, experts told CNA, with political falsehoods among the fastest categories.

The most successful misinformation campaigns trigger strong emotions such as fear, anger or shock, prompting users to share quickly without scrutinising the content.

This, in turn, makes them even more difficult to counter, they said.

These misinformation campaigns usually emerge when there is a vacuum of trusted information and intense public demand for updates, said Associate Professor Gao Wei from Singapore Management University’s (SMU) computing school.

“In a crisis, people are looking for immediate explanations, and the first dramatic story often travels before the first verified one,” he added.

Revealing the truth is slower, he said. “Verification takes time, evidence, trusted sources and coordination. A false claim only needs to be interesting enough to share.”


On Tuesday, the Israeli prime minister posted a photo on X of himself ordering the elimination of senior Iranian officials – and the replies on the social media platform were full of users claiming it was an old photo or one generated by AI.

“Please look closely,” said one reply, attaching a picture with multiple areas of the photo circled, alleging they found clues indicating that the photo was fake.

Misinformation refers to false information that is produced without the intention of causing harm, while disinformation refers to false information that is intentionally produced to cause damage.

PLAYING TO BIASES

Being exposed to the same falsehood multiple times from different sources can make the fake claim very convincing, said Professor Edson Tandoc, the director of Nanyang Technological University’s (NTU) Centre for Information Integrity and the Internet.

Appealing to preconceptions or biases that people have also works, he added.

“Playing into people’s confirmation bias is a recipe for deception. Playing into people’s emotions is another ploy,” he said.

Misinformation has to be corrected immediately before it goes viral, said Prof Tandoc. The longer the lag between the spread of fake news and the correction, the higher the risk that more people will be exposed to the wrong information.

Once the false claim is repeated many times and people publicly commit to it, it becomes much harder to change minds, said Professor Lee Mong Li, the director at National University of Singapore’s (NUS) Centre of Trusted Internet and Community.

At this stage, individuals often reject corrections to avoid cognitive dissonance, which is why changing their minds is more difficult, added Professor Wynne Hsu, who heads NUS’ Institute of Data Science.

GENERATIVE AI

Experts CNA spoke to were unsurprised about the rumours of Netanyahu’s death, and warned that generative AI has made misinformation campaigns far easier and more convincing.

In this case, the rumours also fed on wartime uncertainty, limited public appearances by Netanyahu and viral claims about AI-manipulated footage, said Associate Professor Saifuddin Ahmed from NTU’s Wee Kim Wee School of Communication and Information.

Distinguishing AI-manipulated footage or deepfakes created by generative AI is becoming difficult for everyday users, he added.

For now, individuals should verify the source, look out for facial glitches, odd lighting or unnatural audio, and see whether credible news outlets or accounts confirm it, said Assoc Prof Saifuddin.

“If a dramatic video appears only on random social accounts, that’s a red flag,” he added.

NTU’s Prof Tandoc said: “We were all expecting that physical, traditional combat would be complemented by disinformation warfare – we’ve seen that in previous conflicts as well.

“It was just a matter of time.”

While it is still possible to verify the videos or check if they were made with generative AI, the other problem is whether people want to check if they are real, or have the skills and time to do so, he added.

With the volume of messages and images about the conflict online, people may not have the time to verify each video they see, said Prof Tandoc.

In this situation, they just rely on their biases, he added. “If this video supports what I believe in, then I want it to be true, then it must be true.”

Generative AI has made spreading misinformation easier by reducing production costs, said Assistant Professor Ke Ping Fan from SMU’s computing school.

Anyone can generate text, images, videos or audio in multiple languages without much effort, he said, adding that deepfakes can be nearly indistinguishable from reality.

“Even if the quality of deepfakes is not good enough, they can be used to fuel rumours by prompting debates,” he said.

People typically check for AI-generated videos by looking for technical inconsistencies such as unnatural lip-syncing or metadata like watermarks, said Asst Prof Ke, noting that these can be less obvious or removed in more deliberate and sophisticated videos.

“A better approach is to verify the source and chain of custody – asking where the video originated and whether reputable news organisations have verified it,” he added.

LEGAL LEVERS

Some countries have laws that empower governments to act against disinformation.

Dr Carol Soon, the deputy head of NUS’ communications and new media department, noted that Singapore and Australia have such laws in place.

While mandating that the content be removed may mitigate the spread of false claims, the reality is that some people would have already seen the disinformation and are sharing it, said Dr Soon.

Both upstream and downstream efforts are needed to combat disinformation. This could include community outreach to increase understanding and empathy between different communities, or the timely debunking and calling out of disinformation and its actors, she added.

The difficulty is that misinformation can often be corrected early, but it can rarely be erased entirely, said NUS College dean Simon Chesterman, who was previously dean of the university’s law faculty.

Legal tools such as Singapore’s Protection from Online Falsehoods and Manipulation Act can help by attaching corrections and, in more serious cases, restricting access, he added.

But no legal regime will eliminate misinformation completely, said Prof Chesterman, who is also senior director of AI Governance for AI Singapore.

“In the end, the most durable defence is public resilience: citizens who are neither so gullible that they believe everything, nor so cynical that they don’t believe anything,” he added.
