
Managing the cost of disinformation

By Joseph Were

Why countering the threat requires media houses to inspire and innovate

There are signs that news gathering around the 2025-26 general election is likely to be fast-paced, social media-driven, dangerous, and packed with disinformation, misinformation and mal-information.

The disinformation around the 2021 general election in Uganda was big, and it is likely to be bigger this time. Numbers on its volume and value are scanty for Uganda, but according to the statistics website Statista, the total cost of misinformation and the spread of fake news in 2020, taking into account fake news in the financial, political, and healthcare fields, was $78 billion. That figure includes money spent on for-hire advertising, marketing, and public relations companies dedicated to manipulating online opinion.

That means media houses will have to set up dedicated disinformation desks and assign journalists to fact-check, get online, and refute disinformation. As they do that, media houses must be aware that it is going to be costly, largely because, to paraphrase a famous line attributed to the Italian programmer Alberto Brandolini, the amount of energy needed to refute disinformation is an order of magnitude bigger than that needed to produce it.

Facebook was paying fact-checkers big money before it switched from third-party fact-checkers to community notes contributors, according to Pyrra, an American organisation dedicated to making the internet, and the world, safer by protecting people from harm online and off. In 2017 Facebook reportedly paid $100,000 to Snopes, which was among the first fact-checking organisations in the world; in 2018 it paid $406,000. It also paid FactCheck.org $188,881 in 2018 and $242,400 in 2019.

Fact-checking in Uganda is struggling to gain a foothold. Efforts by independent organisations such as the Debunk Media Initiative supplement the work of for-hire freelance fact-checkers. That means serious media houses determined to make a difference will have to set up dedicated fact-checking desks, which will mostly mean adding overhead costs to already struggling entities.

Globally, many newspapers have had disinformation beats and fact-checking desks for some time. FactCheck.org was launched in 2003 at the Annenberg Public Policy Center of the University of Pennsylvania, in America; at that point, fact-checking had been around for about a decade. Today, the French international news agency AFP prides itself on having developed the most extensive fact-checking network in the world. In 2024, it published 7,536 fact-checks, or about 20 every day.

When the Duke Reporters’ Lab at the Sanford School of Public Policy at Duke University, USA, tracked the growth of dedicated fact-checking globally, it found 151 organisations in 2015 and 424 in 2022. That is a 180% jump in the number of dedicated fact-checking organisations in seven years.
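
For readers checking the maths, the jump is measured against the 2015 baseline, not as a ratio of the two totals:

$$\frac{424 - 151}{151} = \frac{273}{151} \approx 1.81 \approx 180\%$$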

Fact-checking requires dedicated organisations, practicing professionals, tools of the trade, codes of ethics, regulatory associations, and more. Fact-checkers must go through mountains of correct information to ensure one falsehood does not slip through. Imagine watching video after video, image after image, or reading text after text in a fast-paced environment.

In the context of the 2026 election, the disinformation ecosystem is likely to be robust. It will churn out fake photos, montages, videos, and stories to create a false picture of reality before voters.

Disinformation will be created in backrooms and specialised disinformation labs. It will be published and shared on social media, websites, blogs, and other digital platforms. It will go viral in memes, be adopted by mainstream media, circulated by internet influencers, and re-echoed by candidates and their campaign surrogates at rallies, debates, and interviews.

Researchers from the MIT Sloan School of Management found in 2018 that false news spreads more rapidly on Twitter (now X) than real news does, and by a substantial margin. Equally significantly, the scholars found that the spread of false information was not driven mainly by bots programmed to disseminate inaccurate stories. Instead, false news spreads faster because people retweet inaccurate news items.

A study by the University of Southern California in 2023 found that the biggest driver of the spread of fake news is the social platforms’ structure of rewarding users for habitually sharing information. The researchers found that just 15% of the most habitual news sharers in the study were responsible for spreading about 30% to 40% of the fake news.

This viralisation of disinformation can be traced to 2006, when Facebook added the “share” button. It sparked a spiral of propaganda campaigns, legitimate political advertising, and even for-profit troll farms, all working their way through an increasingly opaque, algorithmically driven social media ecosystem.

This period presents legacy media with both predicament and opportunity. Although costly and sluggish compared to the fast-paced world of AI, chatbots, and social media, a focus on time-tested journalistic processes presents the best path to successful election coverage for legacy media houses. It will ensure they maintain audience trust through transparency, accuracy, fairness, and, yes, objectivity (the neutral reporting of “just the facts”).

Legacy media houses and their digital affiliates could deploy their brand appeal as purveyors of election coverage that is bereft of slanted interpretation and biased opinion. They could stamp their authority as sources of ethical journalism by covering the elections as a contest of opposing ideas and personalities in a space crowded with blogs, websites, and social media peddlers of so-called engaged journalism that abjures neutrality as a value. Every masthead and byline must become a mark of unbiased facts, informed analysis, and transparent process. News articles must bear information about the authors, their expert credentials and their values.

At every opportunity, legacy media must coach their audiences to differentiate disinformation, which is false information intentionally shared to mislead, and its cousin mal-information, which is true information intentionally shared to mislead, from misinformation, which is false information inadvertently shared with no intention of causing harm.

Disinformation is not always a lie or fabrication. It can be the truth deployed to mislead, or truth mixed with lies and fabrication. Members of the public must be urged to check the source of information and query its purpose. Some disinformation is designed to create distrust, hopelessness, apathy, disengagement, and uncertainty about the election. Members of the public plagued by these feelings will not participate in the process.

Media houses face major hurdles because they must be equipped to detect every mode of deepfake. AI-generated videos and images can look very genuine, and sorting the gem from the garbage can be tough work, especially if the fact-checkers lack the skills or the technology. The ideal deepfake detection tool would be multimodal: able to detect fake video, images, audio, and text. In practice, most detection tools target video-fakes; there are fewer tools for audio deepfakes, mainly because audio fakes are more difficult to detect.

To detect deepfakes, it is no longer enough to manually look out for and spot glitches and inconsistencies in videos; dedicated detection tools now exist. The fakers are ahead and the fact-checkers are playing catch-up, but the detection tools continue to improve.
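
As an illustration of that triage, here is a minimal sketch in Python of how a desk might route incoming media by modality. Everything in it is hypothetical: the scoring functions are stand-ins for whatever licensed detection models a newsroom actually uses, and the 0.7 escalation threshold is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    url: str
    modality: str  # "video", "image", "audio", or "text"

# Hypothetical detector stubs: each returns a suspicion score between
# 0.0 (looks authentic) and 1.0 (almost certainly synthetic). A real desk
# would call licensed deepfake-detection models here.
def score_video(item: MediaItem) -> float:
    return 0.2

def score_image(item: MediaItem) -> float:
    return 0.2

def score_audio(item: MediaItem) -> float:
    # Audio detectors are scarcer and weaker, so this sketch treats
    # every audio item as suspect enough to merit a human listen.
    return 0.8

def score_text(item: MediaItem) -> float:
    return 0.2

DETECTORS = {
    "video": score_video,
    "image": score_image,
    "audio": score_audio,
    "text": score_text,
}

def triage(item: MediaItem, threshold: float = 0.7) -> str:
    """Route one item: escalate suspect media to a human fact-checker."""
    score = DETECTORS[item.modality](item)
    return "escalate to fact-checker" if score >= threshold else "routine queue"

print(triage(MediaItem("https://example.com/clip.mp3", "audio")))
# -> escalate to fact-checker
```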

Media houses will need dedicated ‘behind the scenes’ teams to intervene at least at two points. The first is down-ranking: screening content before it goes live, moderating it early, and making suspect content less visible. The second is deplatforming: removing rogue information and its purveyors from media platforms.
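
A minimal sketch of those two interventions, assuming an upstream classifier has already assigned each post a risk score between 0 and 1; the thresholds and the three-strikes rule are illustrative assumptions, not any platform’s real policy.

```python
def moderate(post: dict, strikes: dict[str, int],
             downrank_at: float = 0.6, remove_at: float = 0.9) -> str:
    """Apply the two interventions: down-rank suspect posts, deplatform repeat offenders."""
    risk = post["risk"]      # assumed output of an upstream classifier
    author = post["author"]

    if risk >= remove_at:
        # Second intervention: remove rogue content; authors who
        # repeatedly post it lose access to the platform entirely.
        strikes[author] = strikes.get(author, 0) + 1
        return "deplatform author" if strikes[author] >= 3 else "remove post"

    if risk >= downrank_at:
        # First intervention: hold for early moderation and
        # reduce the content's visibility while it is reviewed.
        return "down-rank and queue for human review"

    return "publish"

strikes: dict[str, int] = {}
print(moderate({"author": "acct42", "risk": 0.95}, strikes))  # -> remove post
```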

It helps to know that disinformation thrives on at least four weak points in the communication perception loop. First, disinformation thrives because it is crafted to be catchier than true news; it is designed to spark emotional reactions such as excitement, anger, fear, frustration, and more. Second, these emotions propel people to act on the disinformation without investing time and energy to check whether the news they are exposed to is true or fake. Third, when disinformation is spread by many sources, each encounter creates a confirmation bias that it must be true. Fourth, in a circular reporting model, one source publishes misinformation, which is picked up by more and more outlets in an echo chamber of falsehood.

Effective fact-checking must, therefore, be counterintuitive. It must involve skepticism about catchy breaking news, emotional detachment, avoiding confirmation bias, and verifying before reusing or sharing. 
