
Mainstream media houses and their digital affiliates must protect citizens and preserve the integrity of published information
Uganda’s 2025-2026 election season is here, and the country’s legacy or mainstream media houses and their digital affiliates should either have, or be drawing up, strategies for how their coverage will maximise audience attention, make them money, and keep them out of trouble. Among the notable troubles media houses must strategise against is the avalanche of disinformation they are likely to encounter.
Disinformation, which is false information intentionally shared to cause harm, and misinformation, which is false information shared with no intention of causing harm, have the potential to mislead citizens and influence or disrupt an election. Equally dangerous is their cousin, mal-information: true information intentionally shared to mislead or cause harm.
It is the duty of the country’s legacy or mainstream media houses and their digital affiliates to protect citizens and preserve the integrity of published information against this trio of threats during an election.
Disinformation will feature prominently in the 2026 Uganda elections, according to many observers. Africa Up Close, an American blog of the Wilson Center that covers US-Africa issues, says disinformation will ride on new electoral dynamics and political messaging in Uganda, driven by a majority-young generation and innovative engagement strategies built on social media, memes, and viral content.
Detecting, disarming, and dismantling disinformation in the 2025-26 elections is critical because, in the last general election in 2021, Ugandan media houses were bombarded with tonnes of disinformation, some of which they dutifully published on their platforms, to their embarrassment and occasional peril.
Many media houses only became aware of the mass disinformation campaign when the Digital Forensic Research Lab (DFRLab), a research arm of the Washington, D.C.-based Atlantic Council, announced that it had noticed Uganda election-related Twitter (now X) and Facebook accounts engaging in suspicious online behaviour. Meta flags such activity as “coordinated inauthentic behaviour”: deceptive activity aimed at distorting public discourse.
DFRLab revealed that the disinformation agents, acting on their own behalf or for certain candidates and campaign managers, were using fake and duplicate Facebook and Twitter accounts to manage pages, comment on other people’s content, impersonate users, and re-share posts in groups to make them appear more popular than they actually were. The accounts were used to smear candidates, respond to negative content, or promote a candidate. DFRLab said public relations firms that were part of the network had a combined following of over 10,000 accounts. Meta and Twitter disabled 32 Facebook pages, 220 user accounts, 59 groups, and 139 Instagram profiles.
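To make the pattern concrete, the short Python sketch below shows one simple signal investigators look for: clusters of accounts posting identical text within seconds of one another. The feed, account names, and time window are hypothetical illustrations, not DFRLab’s or Meta’s actual method.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical feed of (account, text, unix_timestamp) posts.
    posts = [
        ("acct_a", "Vote for candidate X, the only choice!", 1610000000),
        ("acct_b", "Vote for candidate X, the only choice!", 1610000004),
        ("acct_c", "Vote for candidate X, the only choice!", 1610000007),
        ("acct_d", "Lovely weather in Kampala today.", 1610000100),
    ]

    WINDOW = 30  # seconds; identical posts this close together look coordinated

    # Group posts by their exact text, then count account pairs that
    # published the same text within the time window.
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    pair_counts = defaultdict(int)
    for entries in by_text.values():
        for (a1, t1), (a2, t2) in combinations(entries, 2):
            if a1 != a2 and abs(t1 - t2) <= WINDOW:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    # Pairs that recur across many posts are candidates for manual review.
    print(pair_counts)

Real investigations combine many such signals, including shared infrastructure and synchronised account creation, before drawing any conclusion.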
Media houses were also unprepared when, on January 13, 2021, the Uganda Communications Commission (UCC), the national communications regulator, banned and disabled access to social media platforms, including Meta’s Facebook and Instagram as well as Snapchat and YouTube.
As happened in the last election, the run-up to and aftermath of the 2026 general election are expected to generate manipulative content intended to do harm.
In the 2021 election, the disinformation centred mainly on distributive technologies such as Facebook and Twitter. In 2026, the disinformation campaigns are likely to be more sophisticated and robust, combining both generative and distributive manipulation. This is mainly because of advancements over the last three years in the technology for generating and circulating content.
According to UNESCO’s guide for electoral practitioners, “Elections in Digital Times”, Artificial Intelligence (AI) has the potential to improve the efficiency and accuracy of elections, but it also poses multiple risks.
In the 2026 election, generative AI-based deepfake technology that can write or rewrite text and produce audio, video, and imagery that appear to come from trusted sources could be deployed. Such content carries a high risk of being shared intentionally to mislead because it can quickly go viral via small media houses that struggle with limited time, finances, and human resources. Unable to deploy enough staff to cover an election, these outlets can inadvertently sneak unverified, aggregated, or curated content into their election coverage.
Research by Sumsub, a leading identity and document verification company operating in 220 countries and territories, shows a 245% year-over-year increase in deepfakes worldwide, with especially sharp rises in countries heading into elections, including the US, India, Mexico, Indonesia, and South Africa. Pavel Goldman-Kalaydin, Head of AI/ML at Sumsub, says media platforms must stay vigilant, remaining aware and updating their defences to detect deepfakes and prevent AI-generated fraud. They need to make sure they are not inadvertently contributing to the spread of misinformation.
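One inexpensive defence that even a small newsroom can deploy is perceptual hashing, which flags incoming images that closely match content already debunked. The Python sketch below uses the open-source Pillow and imagehash libraries; the file paths and the matching threshold are illustrative assumptions, not a complete verification workflow.

    from pathlib import Path

    import imagehash
    from PIL import Image

    THRESHOLD = 8  # max Hamming distance to treat two images as near-duplicates

    def flag_known_fake(incoming: Path, debunked_dir: Path) -> list[Path]:
        """Return previously debunked images that perceptually match this one."""
        incoming_hash = imagehash.phash(Image.open(incoming))
        matches = []
        for fake in debunked_dir.glob("*.jpg"):
            # Subtracting two perceptual hashes gives their Hamming distance;
            # recompressed or lightly edited copies stay within a small distance.
            if incoming_hash - imagehash.phash(Image.open(fake)) <= THRESHOLD:
                matches.append(fake)
        return matches

    hits = flag_known_fake(Path("submission.jpg"), Path("debunked/"))
    print("Possible recycled fakes:", hits)

A check like this catches recycled and lightly edited fakes; it does not detect a freshly generated deepfake, which still requires human verification or specialised detectors.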
The deepfake AI market was estimated at $562.8 million in 2023, was expected to reach $764.8 million in 2024, and is forecast to grow at an impressive compound annual growth rate (CAGR) of 41.5% through 2030, reaching $6.14 billion, according to a report titled “A Survey on Deepfake Technologies” presented at the Africa Law Tech Festival 2024 in Nairobi, Kenya.
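Those projections are internally consistent: compounding the 2024 estimate at 41.5% a year for the six years to 2030 lands almost exactly on the reported figure, as the short Python check below shows (the arithmetic is ours; the report’s own methodology is not stated).

    def project(value: float, cagr: float, years: int) -> float:
        """Compound a starting value at a fixed annual growth rate."""
        return value * (1 + cagr) ** years

    start_2024 = 764.8e6  # USD, the report's 2024 estimate
    print(f"2030 projection: ${project(start_2024, 0.415, 6) / 1e9:.2f} billion")
    # Prints: 2030 projection: $6.14 billion, matching the report's figure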
Although live deepfake technology remains largely out of reach in Uganda, there are common open-source tools that can morph or swap a face in an image, and others that can create highly realistic fake audio impersonations. Other technologies, such as GPT-3, can use natural language processing (NLP) and natural language generation (NLG) techniques to write propaganda or fake news articles.
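To illustrate how low the barrier has become, the sketch below generates fluent but entirely fabricated copy with the freely downloadable GPT-2 model, via the open-source Hugging Face transformers library. GPT-2 stands in here for the commercial GPT-3 named above, and the prompt is invented for illustration.

    from transformers import pipeline

    # Load an off-the-shelf text-generation model; no training required.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Breaking news from the campaign trail:"
    result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

    # The output reads fluently and confidently, yet is entirely made up.
    print(result[0]["generated_text"])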
That means that during any election, media houses and fact-checkers must plan for and be ready to confront such disinformation, misinformation, and mal-information. It could be peddled by individuals, political organisations, electoral bodies, civil society, foreign states and agents, and the government. The motives of the purveyors of disreputable information may differ, but the resolve of any reputable media organisation to frustrate their intent must be unshakable.