Trend Micro's Dustin Childs Discusses LLMs' Hacking Ability

Trend Micro's Dustin Childs explores LLMs' ability to autonomously exploit real-world systems
Although many in the cyber field are concerned about LLMs' ability to augment attacks, Trend Micro's Dustin Childs explains why you need not panic just yet

In the ever-evolving landscape of AI, generative AI and, in turn, Large Language Models (LLMs) like GPT-4 have become topics of both fascination and concern in the cybersecurity community.

These advanced AI systems, capable of generating human-like text and code, have raised questions about their potential to autonomously exploit vulnerabilities in real-world systems.

One concern stems from LLMs' capability to process and generate complex code structures. In theory, this ability could be leveraged to autonomously identify vulnerabilities in software systems, craft exploit code, or even execute sophisticated cyber attacks like SQL injections. 

As the capabilities of LLMs continue to expand, cybersecurity experts are closely examining the implications for both defensive and offensive security measures. But what implications do cyber professionals need to contend with?

To find out more, we spoke with Dustin Childs, Head of Threat Awareness at Trend Micro, about LLMs' current and future capabilities to cause cybersecurity professionals a headache.

Dustin Childs bio
  • A recognised expert in response communications, intrusion detection, and network security, Dustin has a proven aptitude for bridging the gap between engineering and marketing to deliver results. He has over 15 years of experience performing diverse functions with a major concentration in information assurance and network defence.

Current state of LLMs in cybersecurity

LLMs and Gen AI systems like GPT-4 have sparked discussions about their potential capabilities in cybersecurity, particularly their ability to autonomously hack systems.

The good news is, while these AI models have shown impressive capabilities in natural language processing and code generation, their application in autonomous hacking is still limited and largely theoretical.

"As of today, they can't. LLMs like GPT-4 or Microsoft's Co-Pilot are powerful tools for Natural Language Processing (NLP) and generation," said Dustin. "But they are not inherently designed to autonomously perform hacking or complex attacks such as SQL injections."

This clarifies that while LLMs can generate code snippets for common exploits when prompted, they lack the inherent ability to autonomously execute complex cyber attacks.

Yet, in analysing the key capabilities of advanced LLMs in finding and exploiting vulnerabilities, Dustin notes: "LLMs are not currently capable of autonomously finding or exploiting vulnerabilities. However, LLMs can assist in gathering information on potential vulnerabilities by summarising known exploits, providing details on how certain vulnerabilities are exploited, and even suggesting tools or techniques to use in penetration testing.

"This means that they rely on external scripts or human operators to carry out actions on real-world systems, limiting their ability to autonomously exploit vulnerabilities."

This highlights the current role of LLMs as assistive tools rather than autonomous hacking entities. They can provide valuable information and suggestions, but cannot independently carry out complex attacks.

LLMs augmenting abilities

However, the potential misuse of LLMs by threat actors is a concern. 

"Threat actors can leverage LLMs capabilities to aid in the creation of exploits, amplifying their malicious activity," says Dustin. "Let's consider SQL injections as an example. A threat actor might prompt the LLM to generate different payloads to test various input fields of a web application for SQL injection vulnerabilities.

"They can also use these payloads in the target web application and analyse the responses. If the response changes in a way that indicates a successful injection, further exploitation might be possible," he explains.

This scenario illustrates how malicious actors could potentially use LLMs to enhance their attack strategies, even if the models themselves cannot autonomously execute attacks.
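The probe-and-compare loop Dustin describes can be sketched in a few lines. This is a minimal, illustrative example run against a deliberately vulnerable toy function and an in-memory SQLite database rather than any live target; the payload strings, function names, and schema are all illustrative, not from the interview.

```python
# A minimal sketch of payload probing against a toy, intentionally
# vulnerable lookup function backed by an in-memory SQLite database.
import sqlite3

PAYLOADS = ["' OR '1'='1", "'; --", '" OR ""="']  # classic probe strings

def vulnerable_lookup(conn, username):
    # Deliberately unsafe: user input is concatenated straight into the SQL.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def probe(conn):
    """Send each payload and flag those that change the response."""
    baseline = vulnerable_lookup(conn, "no_such_user")
    findings = []
    for payload in PAYLOADS:
        try:
            if vulnerable_lookup(conn, payload) != baseline:
                findings.append(payload)  # response changed: likely injectable
        except (sqlite3.Error, sqlite3.Warning):
            findings.append(payload)  # a SQL error also signals injectable input
    return findings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(probe(conn))
```

The tautology payload (`' OR '1'='1`) changes the result set from empty to a row, which is exactly the "response changes" signal the quote describes; an LLM's contribution here would simply be generating and varying the payload list.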

LLMs vs traditional cybersecurity tools

Humans still play a key role in cybersecurity today, as AI is unlikely to autonomously hack systems without human intervention or knowledge of the vulnerabilities in the near future.

"At this point, LLMs cannot produce results similar to other automated forms of reverse engineering and exploit development. For example, fuzzing remains a better technology than LLMs when it comes to finding bugs within a closed-source application."

This comparison underscores that established cybersecurity techniques and tools still outperform LLMs in practical application.

Looking to the future, Childs suggests a more likely scenario for LLM application in cybersecurity.

"LLMs can be trained to review code for problems before a product ships. It is more likely that this form of code review will be common before an LLM gains the capability to autonomously find vulnerabilities."

This perspective highlights the potential for LLMs to contribute positively to cybersecurity by enhancing code quality and identifying vulnerabilities before they can be exploited.

While LLMs have shown impressive capabilities in language processing and code generation, their ability to autonomously hack systems remains limited. Their current value in cybersecurity lies more in augmenting human expertise and automating benign tasks rather than in autonomous exploitation.

"By combining technical controls, ethical guidelines, and continuous monitoring, it’s possible to harness the benefits of LLMs while minimising the risks associated with their misuse in autonomous hacking and other malicious activities," Dustin concludes.

As these technologies continue to evolve, it will be crucial to implement safeguards and ethical guidelines to ensure their responsible use and be prepared for their adversarial use in the field of cybersecurity.


Cyber Magazine is a BizClik brand
