
How AI-Generated Videos Are Influencing Elections in Asia

As the world prepares for a busy election year in 2024, a new threat is emerging that could undermine the credibility and integrity of democracy: deepfakes. These are videos or audio clips that use artificial intelligence (AI) to manipulate or create the appearance or voice of a person, often a political leader or a celebrity.

In recent years, deepfakes have become more realistic, accessible and widespread, thanks to advances in AI technology and the availability of online tools and platforms. While some deepfakes are harmless or humorous, such as Instagram reels of Indian Prime Minister Narendra Modi singing in regional languages, others are malicious and misleading, such as TikTok videos of Indonesian presidential candidates Prabowo Subianto and Anies Baswedan speaking in fluent Arabic.

In countries like Pakistan, India, Indonesia and Bangladesh, where elections are due in 2024, deepfakes pose a serious challenge to the public’s trust and information literacy. These countries have large and diverse populations, low levels of digital and media literacy, and high rates of social media usage and misinformation. Deepfakes can exploit these vulnerabilities and influence voter perceptions and behaviour, especially if they are not detected and debunked in time.

According to DeepMedia, a company that develops tools to detect synthetic media, at least 500,000 video and voice deepfakes were shared on social media sites globally in 2023. Some of these deepfakes were used for political purposes, such as:

  • In the lead-up to Pakistan’s election on Feb 8, former Prime Minister Imran Khan, currently incarcerated on charges under the Official Secrets Act, used AI to create a digital clone of himself for an online rally in December. The event, which drew more than 1.4 million YouTube views and tens of thousands of live attendees, underscores the growing role of synthetic media in political discourse. Pakistan has drafted an AI law, but critics, including digital rights activists, say it lacks safeguards against disinformation and protections for vulnerable groups, particularly women. Nighat Dad, co-founder of the non-profit Digital Rights Foundation, said disinformation already poses a significant threat to elections and the democratic process in Pakistan, having influenced voting behaviour, party support and even legislative changes, and warned that synthetic media could exacerbate the problem.
  • In India, where more than 900 million people are eligible to vote, Modi has called deepfake videos a “big concern”, and authorities have warned social media platforms that they could lose the safe-harbour status shielding them from liability for third-party content if they do not act. In November 2023, a local election in Rajasthan was marred by allegations that politicians used deepfake videos to sway voters. Divyendra Singh Jadoun, founder of The Indian Deepfaker, a company that makes AI-based visual effects and voice clones, said several politicians approached him to create deepfake videos for their campaigns, but he declined. He said the technology is now so good that people cannot tell whether a video is real or fake, and India has no guidelines on deepfakes.
  • In Indonesia, where more than 200 million voters will go to the polls on Feb 14, 2024, deepfakes of all three presidential candidates and their running mates are circulating online and could influence the election’s outcome, said Nuurrianti Jalli, who studies misinformation on social media. In environments where misinformation is already prevalent, she said, AI-generated content can further skew public perception and voting behaviour. In June 2023, a video that appeared to show President Joko Widodo endorsing his rival Prabowo Subianto was widely shared on WhatsApp before being revealed as a deepfake. The Indonesian government has passed a law to combat online hoaxes and fake news, but the law has also been criticised for restricting freedom of expression and targeting dissenting voices.
  • In Bangladesh, where Prime Minister Sheikh Hasina is set for a fourth straight term after polls on Jan 7, 2024, deepfake videos showing female opposition politicians Rumin Farhana in a bikini and Nipun Roy in a swimming pool have emerged. Although they were quickly debunked, the videos continue to circulate, and even poor-quality deepfake content is misleading people, said Sayeed Al-Zaman, an assistant professor of journalism at Bangladesh’s Jahangirnagar University who studies social media. Given the country’s low levels of information and digital literacy, he said, deepfakes can be potent carriers of political propaganda if crafted and deployed effectively, yet the government does not appear concerned.

The problem is not limited to Asia. In other parts of the world, including New Zealand, Turkey, Argentina and the US, deepfakes have been used, or are feared likely to be used, for political purposes such as smearing opponents, spreading false narratives or impersonating leaders. The US, which will hold its presidential election in November 2024, has been particularly alert to the threat, having already experienced the impact of misinformation and foreign interference in previous elections.

Experts and activists have called for more efforts to combat deepfakes, such as developing better detection and verification tools, raising public awareness and education, establishing ethical and legal standards, and holding social media platforms accountable. They have also warned of the potential harms of deepfakes, such as eroding trust in institutions, undermining human rights, and destabilising peace and security.

As the technology of deepfakes becomes more sophisticated and accessible, the challenge of distinguishing between reality and illusion becomes more urgent and complex. In the era of deepfakes, voters need to be more vigilant and critical, and demand more transparency and accountability from their leaders and media. Otherwise, democracy may be at risk of being hijacked by AI.
