Trend Micro's Lewis Duke on Fighting AI-Powered Fake News

Lewis Duke works as SecOps and Threat Intelligence Lead at Trend Micro
Although cyber is not the obvious answer for fighting AI-powered fake news, Trend Micro's Lewis Duke explains how upcoming elections have made it so

The proliferation of AI-augmented attacks is now well known in the cyber sector. What is less considered, however, is AI-augmented social engineering designed to sow misinformation or doubt. 

Although not usually in the wheelhouse of a cybersecurity team, the upcoming elections mean many such teams will be at the forefront of fighting it. Because this is not a typical breach, new ways of tracking, detecting and countering the threat have to be developed.

AI-powered bots, Gen AI's reasoning and writing abilities, and deepfakes are now a real concern, and this comes at a pivotal time. With the US election looming, and distrust about its result already being voiced, cybersecurity professionals are under increasing pressure to make sure any disruptions are dealt with.  

To find out what can be done to fight this, we spoke with Lewis Duke, SecOps and Threat Intelligence Lead at Trend Micro, for his insights into this new threat. 

Lewis Duke bio
  • In Lewis’ current role as SecOps and Threat Intelligence Lead at Trend Micro, he helps businesses recognise weaknesses and develop and mature their security operations using Trend Micro’s portfolio of products. This is all part of his philosophy of wanting a secure digital world in which we can all live our lives in safety.

AI’s election-meddling capabilities 

The proliferation of AI has ushered in a new era of sophisticated disinformation tactics, particularly targeting elections. These AI-powered campaigns exploit our modern information consumption habits, spreading false narratives at unprecedented speeds and scales.

"AI is being used for disinformation ahead of elections because it is such an effective and easy tool for undermining trust in political figures and parties,” explains Lewis. “Our modern consumption of information through social media, which often presents information in brief, concentrated doses, is ideal for the spread of 'fake news.’”

This rapid dissemination of unverified information poses a significant challenge for those tasked with maintaining the integrity of democratic processes. The speed at which these campaigns can reach millions creates a race against time for fact-checkers and the cyber professionals who act on their findings.

Where AI presents a growing threat is in the plausibility of the content it produces. “AI can significantly enhance previous disinformation efforts through its capacity to process and analyse vast data sets,” explains Lewis. 

“This ability allows AI to generate content that aligns with existing disinformation narratives, and with that making the false information seem more credible."

Equally, AI tools are now more approachable than ever, and have thus lowered the barrier to entry for those wishing to launch disinformation campaigns. This means the volume of disinformation may now reach unprecedented levels.

That also means more sophisticated social manipulation vectors are now open. 

“AI is being used to create deepfake audio clips, which can be used in mobile messaging, and deepfake videos, where politicians appear to make inflammatory or false statements,” Lewis states. 

AI-powered deepfakes, which utilise advanced machine learning (ML) algorithms to create highly realistic but entirely fabricated audio and video content, pose a significant threat to the dissemination of accurate information.

By seamlessly altering images, videos, and voices, deepfakes can convincingly impersonate public figures, leading to the spread of false information and deceptive narratives. 

But it’s not just the public who need to be warned. Organisations could be used as vessels to spread misinformation. If, for instance, an attacker managed to gain access to a company’s systems and compromise a number of its users’ computers, they could make disinformation appear to come from a legitimate source. 

“Organisations have to be aware of the rising threat of multi-vector attacks, particularly in the context of phishing campaigns,” Lewis explains. “These attacks could increase the perceived credibility of disinformation efforts if launched across multiple platforms simultaneously.”

Fighting the tide of misinformation

As with other social manipulation tactics the cyber sphere is all too aware of, the problem starts with people.

"The proliferation of AI necessitates that cyber leaders place a heightened emphasis on user awareness as a primary defence mechanism,” Lewis explains. 

Just as companies run anti-phishing campaigns telling employees not to click links from unknown senders, organisations must treat outreach as a way of attacking the issue before it becomes a problem.

While the challenges are significant, Duke offers some hope that sufficiently savvy users, or capable AI systems, can spot the signs.

"While AI-generated content can be highly convincing, it often contains subtle errors in language, context, or visual details that can betray its inauthenticity. These discrepancies, although sometimes minor, can be identified by trained analysts or automated detection tools."

A number of companies are working on AI tools that can spot deepfakes
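The kind of automated screening Lewis describes often begins with simple statistical discrepancy checks before any heavier ML model is involved. The sketch below is purely illustrative and assumes nothing about Trend Micro's tooling: two toy heuristics (phrase repetition and sentence-length uniformity, with arbitrary thresholds) that flag text for human review.

```python
from collections import Counter
import statistics


def repetition_score(text: str, n: int = 3) -> float:
    """Fraction of word trigrams that are duplicates; machine-generated
    text often repeats phrasing more than human writing does."""
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(ngrams)


def sentence_length_variance(text: str) -> float:
    """Human prose tends to vary sentence length; very low variance
    is a weak signal of machine generation."""
    cleaned = text.replace('!', '.').replace('?', '.')
    sentences = [s.strip() for s in cleaned.split('.') if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return float('inf')  # too little text to judge
    return statistics.pvariance(lengths)


def flag_for_review(text: str) -> bool:
    """Flag text for human review if either heuristic trips.
    Thresholds here are arbitrary, chosen only for illustration."""
    return repetition_score(text) > 0.15 or sentence_length_variance(text) < 4.0
```

Real detection pipelines layer many such signals into trained classifiers; no single heuristic like these is reliable on its own.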

As we navigate this new landscape of AI-powered disinformation, the importance of critical thinking, user education, and advanced detection tools cannot be overstated. The integrity of democratic processes depends on an informed public’s ability to think critically, and on the cyber community’s ability to communicate the risks.


Cyber Magazine is a BizClik brand
