Christopher Burgess
Contributing Writer

AI-powered chatbots: the threats to national security are only beginning

Opinion
Apr 25, 2023 · 7 mins
Artificial Intelligence | Data and Information Security | Government

As ChatGPT burst on the scene, it became quickly apparent that it holds as many threats as it does wonders. Nation-states around the world are beginning to grapple with the dangers posed by chatbots even as they worry about what comes next.

The United Kingdom’s National Cyber Security Centre (NCSC) recently issued a warning to its constituents on the threat that artificial intelligence (AI) poses to the national security of the UK. It was followed shortly by a similar warning from NSA cybersecurity director Rob Joyce. Clearly, many nations share great concern about the challenges and threats posed by AI.

To get a more rounded view of the dangers of bad actors using AI to infiltrate or attack nation-states, I reached out across the industry. I found plenty of thoughts and opinions, and, frankly, some who opted out of the discussion, at least for now.

The NCSC warned that queries are archived and could thus become part of the underlying large language model (LLM) of AI chatbots such as ChatGPT. Such queries could reveal areas of interest to the user and, by extension, the organization to which they belong. Joyce at the NSA opined that ChatGPT and its ilk will make cybercriminals better at their jobs, especially given a chatbot’s ability to improve phishing verbiage, making it sound authentic and believable even to sophisticated targets.

Secret leakage through queries

As if on cue, Samsung revealed that it had admonished its workforce to use ChatGPT with care. An employee who wished to optimize a confidential and sensitive product design let the AI engine do its thing. It worked, but it also left a trade secret behind, ultimately prompting Samsung to begin developing its own machine-learning software for internal use only.

Speaking about the Samsung incident, Code42 CISO Jadee Hanson observed that despite its promising advancements, the explosion of ChatGPT has ignited many new concerns regarding potential risks. “For organizations, the risk intensifies as any employee feeds data into ChatGPT,” she tells CSO.

“ChatGPT and AI tools can be incredibly useful and powerful, but employees need to understand what data is appropriate to be put into ChatGPT and what isn’t, and security teams need to have proper visibility to what the organization is sending to ChatGPT. With all new powerful technology advances, there come risks that we need to understand to protect our organizations.”
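To make Hanson’s visibility point concrete, here is a minimal sketch, in Python, of the kind of pre-submission filter a security team might place in front of a chatbot API. Everything in it (the patterns, the redact helper, the example hostname and key format) is an illustrative assumption, not any vendor’s actual control.

```python
import re

# Illustrative patterns a security team might flag before a prompt
# leaves the organization; real deployments would use far richer
# detectors (trained classifiers, document fingerprinting, etc.).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Mask suspected sensitive spans and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

# A prompt like the Samsung scenario: internal details bound for an
# external service. Block it, or at least log the findings for review.
clean, hits = redact(
    "Optimize this code that talks to build01.corp.example.com "
    "using key sk-abcdef1234567890XYZ"
)
print(clean)  # sensitive spans masked
print(hits)   # ['api_key', 'internal_host']
```

A filter like this only gives the security team the visibility Hanson describes; it does not undo the deeper problem that anything which slips through is gone for good.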

In short, once you hit “enter” the information is gone and no longer under your control. If the information was considered a trade secret, that single action may be enough for it to be declared a secret no more. Samsung observed that “such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question.”

It is not difficult to extrapolate how such queries originating from within a government, especially the classified information side of government, could put national security at risk.

AI changes everything

Earlier in 2023, Dr. Jason Matheny, president and chief executive officer of the RAND Corporation, outlined the four prime areas his organization sees as national security concerns in testimony before the Senate Homeland Security and Governmental Affairs Committee:

  • The technologies are driven by commercial entities that are frequently outside our national security frameworks.
  • The technologies are advancing quickly, typically outpacing policies and organizational reforms within government.
  • Assessments of the technologies require expertise that is concentrated in the private sector and that has rarely been used for national security.
  • The technologies lack conventional intelligence signatures that distinguish benign from malicious use, differentiate intentional from accidental misuse, or permit attribution with certainty.

It is not hyperbole or exaggeration to state that AI will change everything.

The rising fear of AutoGPT

I had a wide-ranging discussion with Ron Reiter, CTO of Sentra, who previously served in Unit 8200 of the Israel Defense Forces. He commented that his primary fear lies with the advent of AutoGPT or AgentGPT: AI agents that deploy with the GPT engine operating as a force multiplier, improving attack efficiency not a hundredfold but many thousandfold. An adversary gives AutoGPT a task and internet connectivity, and the machine goes and goes (think the Energizer Bunny) until completion. In other words, malware that operates on its own. With AutoGPT, the adversary has a tool that can be both persistent and scaled.
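
The pattern Reiter describes is easier to grasp as code. Below is a deliberately skeletal, benign sketch of an AutoGPT-style loop, assuming hypothetical call_llm and execute stand-ins rather than a real model or real tools; nothing here touches the network.

```python
# A skeletal, benign sketch of the AutoGPT-style loop: the model
# proposes the next step toward a goal, the harness executes it, and
# the result is fed back in until the model declares the goal done.
# call_llm and execute are hypothetical stand-ins for a real
# chat-completion API and real tool use (search, files, shell).

def call_llm(goal: str, history: list[str]) -> str:
    """Stand-in for a model call; returns the next proposed step."""
    steps = ["search for public reports", "summarize findings", "DONE"]
    return steps[min(len(history), len(steps) - 1)]

def execute(step: str) -> str:
    """Stand-in for tool use; a real agent would act on the world here."""
    return f"result of: {step}"

def run_agent(goal: str, max_iterations: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_iterations):  # the loop is what makes it persistent
        step = call_llm(goal, history)
        if step == "DONE":
            break
        history.append(execute(step))
    return history

print(run_agent("research a topic"))
```

The loop is the point: give the same harness real tools and a hostile goal, and it runs unattended at machine speed, which is the persistence and scale Reiter warns about.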

Reiter is not alone. Patrick Harr, CEO of SlashNext, offered that hackers are using ChatGPT to deliver a higher volume of unique, targeted attacks faster, creating a higher likelihood of a successful compromise. “There are two areas where chatbots are successful today: malware and business email compromise (BEC) threats,” Harr says. “Cyberattacks are most dangerous when delivered with speed and frequency to specific targets in an organization.”

Creating infinite code variations

“ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines,” Harr says. “BEC attacks are targeted attempts to social engineer a victim into giving valuable financial information or data. These attacks require personalized messages to be successful. ChatGPT can now create well-written, personal emails en masse with infinite variations. The speed and frequency of these attacks will increase and yield a higher success rate of user compromises and breaches, and there has already been a significant increase in the number of breaches reported in the first quarter of 2023.”

Additionally, Reiter noted, the ability of chatbots to mimic humans is very real. One should expect entities such as the Internet Research Agency, long associated with Russian active measures (specifically misinformation and disinformation), to be working overtime to evolve capabilities that capture a specific individual’s tone, tenor, and syntax. The target audience may know such mimicry is possible, but when confronted with content from the real individual alongside mimicked content, whom are they going to believe? Trust is at stake.

Harr emphasized that it will take security powered by similar machine learning to mitigate the problem: “You have to fight AI with AI.”
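
As a toy illustration of what fighting AI with AI can look like on the defensive side, the sketch below trains a tiny text classifier to score messages for phishing likelihood. The library choice (scikit-learn) and the four-message training set are illustrative assumptions; SlashNext’s actual models are not public, and real products train on millions of labeled messages.

```python
# A toy illustration of "fighting AI with AI": a text classifier that
# scores messages for phishing likelihood. Training data and library
# choice are illustrative only; production systems train far larger
# models on millions of labeled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please remit payment by Friday.",
    "URGENT: verify your account now or it will be suspended",
    "Lunch on Thursday? The team wants to try the new place.",
    "Wire transfer needed today, CEO requests confidentiality",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = phishing/BEC

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = "Please verify your account and confirm the wire transfer today"
print(model.predict_proba([suspect])[0][1])  # estimated phishing probability
```

The same arms-race logic Harr describes applies here: as attackers use generative models to vary their messages, defenders retrain classifiers like this one on the new variations.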

Should the world pause AI tool development?

Warnings from security agencies around the world would seem to align with an open letter, signed by many who have a dog in this hunt, that calls for a pause on the development of AI tools. But it would seem to be too late for that, as evidenced by a recent US Senate Armed Services Committee subcommittee hearing on the state of artificial intelligence and machine learning applications in improving Department of Defense operations, at which the consensus was that a pause by the United States would be deleterious to the country’s national security.

Those testifying, RAND’s Matheny, Palantir CTO Shyam Sankar, and Shift5 co-founder and CEO Josh Lospinoso, agreed that the United States currently enjoys an advantage and that such a pause would give adversarial nations an opportunity to catch up and create AI models against which the US would be hard-pressed to defend itself. That said, there was a universal call from those testifying for controls to be placed on AI technology, as well as bipartisan agreement within the subcommittee.

The subcommittee asked the three to collaborate with others of their choosing and return in 30 to 60 days with recommendations on how the government should approach regulating AI within the context of protecting national security. One senses from the conversations during the April 19 hearing that AI technologies may come to be designated as dual-use technologies and fall under ITAR (International Traffic in Arms Regulations), which doesn’t prohibit international collaboration or sharing but does require the government to have a say.


Christopher Burgess is a writer, speaker, and commentator on security issues. He is a former senior security advisor to Cisco and has also been a CEO/COO of various startups in the data and security spaces. He served 30+ years within the CIA, which awarded him the Distinguished Career Intelligence Medal upon his retirement; Cisco marked his departure with a stetson and a bottle of single-barrel Jack. Christopher co-authored the book “Secrets Stolen, Fortunes Lost: Preventing Intellectual Property Theft and Economic Espionage in the 21st Century”. He also founded the nonprofit Senior Online Safety.
