A New Era of Manipulation: How Deepfakes and Disinformation Threaten Business

Deepfake abstract concept. PHOTO: Arkadiusz Warguła/iStock

Last weekend, at a typical South African braai (barbecue), I found myself in a heated conversation with someone highly educated—yet passionately defending a piece of Russian propaganda that had already been widely debunked. It was unsettling. The conversation quickly became irrational, emotional, and very uncomfortable. That moment crystallised something for me: we’re no longer just approaching an era where truth is under threat—we’re already living in it. A reality where falsity feels familiar, and information is weaponised to polarise societies and manipulate our belief systems. And now, with the democratisation of AI tools like deepfakes, anyone with enough intent can impersonate authority, generate convincing narratives, and erode trust—at scale.

The evolution of disinformation: From election interference to enterprise exploitation

The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a striking contradiction: while 84% of respondents use social media as their main news source, 80% admit that most fake news originates there. Despite this, 58% have never received any training on identifying misinformation.

This confidence gap echoes findings in the Africa Cybersecurity & Awareness 2025 Report, where 83% of respondents said they’d recognise a security threat if they saw one—yet 37% had fallen for fake news or disinformation, and 35% had lost money due to a scam.

What’s going wrong? It’s not a lack of intelligence—it’s psychology.

The psychology of believing the untrue

Humans are not rational processors of information; we’re emotional, biased, and wired to believe things that feel easy and familiar. Disinformation campaigns—whether political or criminal—exploit this.

  1. The illusory truth effect: The easier something is to process, the more likely we are to believe it—even if it’s false (Unkelbach et al., 2019). Fake content often uses bold headlines, simple language, and dramatic visuals that “feel” true.
  2. The mere exposure effect: The more often we see something, the more we tend to like or accept it—regardless of its accuracy (Zajonc, 1968). Repetition breeds believability.
  3. Confirmation bias: We’re more likely to believe and even share false information when it aligns with our values or beliefs.

A recent example is the viral AI-generated image shared across social media in the aftermath of Hurricane Helene. Despite fact-checkers identifying it as fake, the post continued to spread. Why? Because it resonated with users’ frustration and emotional state at the time.

An AI-generated image of a girl holding a puppy in the aftermath of Hurricane Helene, circulating on social media.

Deepfakes and state-sponsored deception

According to the Africa Centre for Strategic Studies, disinformation campaigns on the continent have nearly quadrupled since 2022. Even more troubling: nearly 60% are state-sponsored, often aiming to destabilise democracies and economies. The rise of AI-assisted manipulation adds fuel to this fire. Deepfakes now allow anyone to fabricate video or audio that’s nearly indistinguishable from the real thing.

Why this matters for business

This isn’t just about national security or political manipulation—it’s about corporate survival too. Today’s attackers don’t need to breach your firewall. They can trick your people. This has already led to corporate-level losses, like the Hong Kong finance employee tricked into transferring over US$25 million (approx. UGX91.67 billion / ZAR473.2 million) during a fake video call with deepfaked “executives.” Corporate disinformation and narrative-based attacks can also result in:

  • Fake press releases that tank your stock.
  • Deepfaked CEOs who authorise wire transfers.
  • Viral falsehoods that ruin reputations before PR even logs in.

The WEF’s Global Risks Report 2024 named misinformation and disinformation as the top global risk, surpassing even climate and geopolitical instability. That’s a red flag businesses cannot ignore.

The convergence of state-sponsored disinformation, AI-enabled fraud, and employee overconfidence creates a perfect storm. Combating this new frontier of cyber risk requires more than just better firewalls. It demands informed minds, digital humility, and resilient cultures.

Building cognitive resilience

What can be done? While AI-empowered defences can help improve detection capabilities, technology alone won’t save us. Organisations must also build cognitive immunity—employees’ ability to discern, verify, and challenge what they see and hear.

  1. Adopt a zero-trust mindset—everywhere:
    Just as systems don’t trust a device or user by default, people should treat information the same way, with a healthy dose of scepticism. Encourage employees to verify headlines, validate sources, and challenge urgency or emotional manipulation—even when it looks or sounds familiar.
  2. Introduce digital mindfulness training:
    Train employees to pause, reflect, and evaluate before they click, share, or respond. This awareness helps build cognitive resilience—especially against emotionally manipulative or repetitive content designed to bypass critical thinking. Educate on deepfakes, synthetic media, AI impersonation, and narrative manipulation. Build understanding of how human psychology is exploited—not just technology.
  3. Treat disinformation like a threat vector:
    Monitor for fake press releases, viral social media posts, or impersonation attempts targeting your brand, leaders, or employees. Include reputational risk in your incident response plans.

The battle against disinformation isn’t just a technical one—it’s psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill we need to hone.

See also: Cybersecurity threats are mounting, but it’s Gen Zs and Alphas who are introducing risks — not “Older Folks”

Editor’s Note: This article was written by Anna Collard, SVP Content Strategy and Evangelist, KnowBe4 Africa