Last weekend, at a typical South African braai (barbecue), I found myself in a heated conversation with someone highly educated, yet passionately defending a piece of Russian propaganda that had already been widely debunked. It was unsettling. The conversation quickly became irrational, emotional, and very uncomfortable. That moment crystallised something for me: we're no longer just approaching an era where truth is under threat; we're already living in it. A reality where falsity feels familiar, and information is weaponised to polarise societies and manipulate our belief systems. And now, with the democratisation of AI tools like deepfakes, anyone with enough intent can impersonate authority, generate convincing narratives, and erode trust at scale.
The evolution of disinformation: From election interference to enterprise exploitation
The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a striking contradiction: while 84% of respondents use social media as their main news source, 80% admit that most fake news originates there. Despite this, 58% have never received any training on identifying misinformation.
This confidence gap echoes findings in the Africa Cybersecurity & Awareness 2025 Report, where 83% of respondents said they'd recognise a security threat if they saw one, yet 37% had fallen for fake news or disinformation, and 35% had lost money due to a scam.
What's going wrong? It's not a lack of intelligence; it's psychology.
The psychology of believing the untrue
Humans are not rational processors of information; we're emotional, biased, and wired to believe things that feel easy and familiar. Disinformation campaigns, whether political or criminal, exploit this.
- The illusory truth effect: The easier something is to process, the more likely we are to believe it, even if it's false (Unkelbach et al., 2019). Fake content often uses bold headlines, simple language, and dramatic visuals that "feel" true.
- The mere exposure effect: The more often we see something, the more we tend to like or accept it, regardless of its accuracy (Zajonc, 1968). Repetition breeds believability.
- Confirmation bias: We're more likely to believe and even share false information when it aligns with our values or beliefs.
A recent example is the viral deepfake image from Hurricane Helene shared across social media. Despite fact-checkers identifying it as fake, the post continued to spread. Why? Because it resonated emotionally with users' frustration and state of mind.

Deepfakes and state-sponsored deception
According to the Africa Centre for Strategic Studies, disinformation campaigns on the continent have nearly quadrupled since 2022. Even more troubling: nearly 60% are state-sponsored, often aiming to destabilise democracies and economies. The rise of AI-assisted manipulation adds fuel to this fire. Deepfakes now allow anyone to fabricate video or audio that's nearly indistinguishable from the real thing.
Why this matters for business
This isn't just about national security or political manipulation; it's about corporate survival too. Today's attackers don't need to breach your firewall. They can trick your people. This has already led to corporate-level losses, like the Hong Kong finance employee tricked into transferring over US$25 million (approx. UGX91.67 billion / ZAR473.2 million) during a fake video call with deepfaked "executives." These corporate disinformation, or narrative-based, attacks can also take other forms:
- Fake press releases can tank your stock.
- Deepfaked CEOs can authorise wire transfers.
- Viral falsehoods can ruin reputations before PR even logs in.
The WEF's 2024 Global Risks Report named misinformation and disinformation as the top global risk, surpassing even climate and geopolitical instability. That's a red flag businesses cannot ignore.
The convergence of state-sponsored disinformation, AI-enabled fraud, and employee overconfidence creates a perfect storm. Combating this new frontier of cyber risk requires more than just better firewalls. It demands informed minds, digital humility, and resilient cultures.
Building cognitive resilience
What can be done? While AI-empowered defences can help improve detection capabilities, technology alone won't save us. Organisations must also build cognitive immunity: the ability of employees to discern, verify, and challenge what they see and hear.
- Adopt a zero-trust mindset, everywhere: Just as systems don't trust a device or user by default, people should treat information the same way, with a healthy dose of scepticism. Encourage employees to verify headlines, validate sources, and challenge urgency or emotional manipulation, even when it looks or sounds familiar.
- Introduce digital mindfulness training: Train employees to pause, reflect, and evaluate before they click, share, or respond. This awareness helps build cognitive resilience, especially against emotionally manipulative or repetitive content designed to bypass critical thinking. Educate on deepfakes, synthetic media, AI impersonation, and narrative manipulation. Build understanding of how human psychology is exploited, not just technology.
- Treat disinformation like a threat vector: Monitor for fake press releases, viral social media posts, or impersonation attempts targeting your brand, leaders, or employees. Include reputational risk in your incident response plans.
The battle against disinformation isn't just a technical one; it's psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill we need to hone.
Editor’s Note: This article was written by Anna Collard, SVP Content Strategy and Evangelist, KnowBe4 Africa