As Bangladesh approaches its next general election, political campaigns have taken an unprecedented turn with the widespread use of artificially generated content.
Synthetic videos featuring fictional voters from various backgrounds endorsing political parties are dominating social media platforms, raising serious concerns about electoral integrity and digital ethics, according to research conducted by Dismislab.
The trend emerged in mid-June 2025 when supporters of Bangladesh Jamaat-e-Islami began posting AI-generated clips on Facebook. These videos, created using Google’s Veo 3 text-to-video tool, depicted fabricated interviews with supposed voters – Hindu women, rickshaw pullers, professionals, and labourers – all declaring support for Jamaat.
Though the party claimed these were grassroots initiatives rather than officially sanctioned campaigns, senior leaders actively shared them.
Soon, rival parties joined the trend. Supporters of the Bangladesh Nationalist Party (BNP), National Citizen Party (NCP), and even the temporarily suspended Awami League (AL) produced similar AI content – some promoting their own candidates, others attacking opponents.
Dismislab’s analysis of 70 such videos found they collectively amassed over 23 million views and one million reactions, demonstrating their extensive reach across social media platforms.
Strategic messaging & identity politics
Jamaat’s AI campaign stood out for its broad representation across religious and socio-economic lines. Clips showed sindoor-wearing Hindu women, hijab-clad Muslims, and professionals all endorsing the party – a deliberate rebranding effort for a group historically associated with conservative Islamist politics.
Some videos framed Jamaat as the only party caring for families affected by last year’s mass uprising, while others invoked religious rhetoric, declaring, “Jamaat is selling tickets to heaven.”
BNP and NCP supporters countered with their own AI-generated narratives. One viral clip featured a young voter condemning Jamaat as “anti-sovereign,” while another mocked religious politics, saying, “Don’t vote for merchants of religion.”
NCP backers positioned their party as a fresh alternative, with synthetic characters urging support for its "Shapla" (water lily) symbol.
Meanwhile, AL-aligned pages – despite the ban on the party's activities – circulated AI content calling for an election boycott, arguing that without their "boat" symbol, participation was meaningless.
Ethical concerns & regulatory gaps
Unlike deepfakes, which manipulate real individuals, these videos feature entirely fabricated personas – a tactic experts label “softfakes.”
Dr Rumman Chowdhury, an AI ethics specialist, warns that such content is “more dangerous than deepfakes because viewers are less on guard.”
Despite Meta's and TikTok's policies requiring AI labels, none of the analysed videos disclosed their synthetic origins. While TikTok automatically flagged some, Facebook's detection systems failed entirely.
This lack of transparency leaves voters vulnerable to manufactured narratives.
Bangladesh currently lacks regulations governing AI in elections, unlike the US and EU, where disclosure laws and bans on deceptive AI content exist.
Analyst Fahmidul Haq stresses the urgent need for policies mandating clear labelling, alongside forensic tools to detect synthetic media.
A new era of digital campaigning
The 2026 pre-election cycle marks a dramatic shift from 2024, when AI disinformation was sporadic. Now, with tools like Veo 3 enabling high-quality, low-cost fabrication, synthetic campaigns are proliferating unchecked.
As Tropa Majumdar, a media expert, notes, “These videos are pure fiction – the people don’t even exist. That makes them unethical, be it in marketing or politics.”
Without immediate action, Bangladesh’s electoral discourse risks being overrun by artificially manufactured consent.