AI and cybersecurity – the rise of the malevolent machine?

August 10, 2023

From the strikes across Hollywood and worrying headlines about radical technological change replacing the human workforce, to the excitement and concern about chatbots such as ChatGPT being used in misinformation campaigns, Artificial Intelligence (AI) is simultaneously enthralling to those seeking to utilise it and frightening to those envisaging scenes where machines take over the world.

Peel back the term AI and you will find that it is a catch-all for distinct concepts and technologies such as Machine Learning, Large Language Models, Deep Learning and Neural Networks. Whilst this article does not set out to delve into the complexities of each of these and how they operate, it is sufficient to say that current AI functions by making predictions based on (often very large) ingested data sets.

OpenAI’s Chat Generative Pre-Trained Transformer, or ChatGPT as it is more commonly known, is perhaps the most pervasive AI technology available to the masses. The latest iteration, built on GPT-4, has been hailed as impressively human-like and described as one of the best AI chatbots made available to the public. This in turn has fuelled an AI arms race, with big tech competitors eager to bring their own versions of the technology to market to enhance their service offerings; Microsoft, for instance, has recently integrated the technology behind ChatGPT into its Bing search engine.

The fundamental premise of ChatGPT is that it predicts which word is likely to appear next in a sentence, based on patterns learned from a large volume of text data. It is essentially a mathematical and statistical model that generates responses to the questions you ask it, one predicted word at a time.
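To make that idea concrete, the toy sketch below (plain Python, with an invented miniature corpus; no real language model is involved) illustrates the basic mechanic of next-word prediction: count which words tend to follow which, turn those counts into probabilities, and offer the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large volume of text data" a real model ingests.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return candidate next words with probabilities derived from the counts."""
    counts = follows[word]
    total = sum(counts.values())
    return sorted(
        ((candidate, count / total) for candidate, count in counts.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

print(predict_next("the"))  # [('cat', 0.5), ('mat', 0.25), ('fish', 0.25)]
```

ChatGPT does the same kind of thing at vastly greater scale, replacing simple counts with a neural network trained on an enormous body of text, but the underlying principle of choosing a statistically likely continuation is the same.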

There is a fascinating array of uses for this technology which offers creativity and efficiencies for individuals and businesses alike. Yet there is also a malicious side which presents real risks and throws into doubt the integrity of information you may consume from the internet or from other sources. These concerns have led to open letters, signed by a collective of rather famous individuals at the helm of some of the largest and most powerful companies in the world, requesting a pause in AI development until the risks are better understood.

Risks from malicious intent broadly fall into the following categories:

  1. Integrity of information
  2. Leveraging malicious code
  3. Privacy

We assume for the most part that the data we consume, or which is presented to us, is rooted in some truth, even allowing for the opinion of the author, the human bias. The issue with ChatGPT, or any other AI-driven platform, is the noncommittal attribute they all share: they do not have to worry about the truth. The software simply does not care whether the output it produces is true to the real world, so long as the output is statistically plausible given the pool of data used to generate it and the code used to interpret that data.

Fake news and misinformation have already wreaked havoc on technology platforms, particularly social media, and have the ability to destabilise political agendas and popular thinking. The threat from AI technologies could amplify this tenfold by generating believable faked content and manipulating the algorithms that determine what we see. In the hands of a manipulative regime, it would prove a very powerful tool for managing propaganda.

Another threat from ChatGPT and its equivalents is their ability to generate cyber security threats and attack scenarios. ChatGPT can be used to produce malicious code for malware and even phishing scams. Whilst the sophistication and tradecraft may not yet be in the same orbit as an Advanced Persistent Threat actor such as a state-sponsored group, lower-level threat actors can readily work around the safeguards put in place and use the tool to craft malicious code. This essentially opens an easier route to market for cybercriminals wanting to engage in phishing and fraudulent activities.

To make a data model more accurate and better performing, it must be fed ever more information. Whilst creative authors and artists are rightly concerned about plagiarism of their work in the data models that sit behind AI, governments and individuals should also have reason to be concerned. Italy recently became the first country to demand that OpenAI stop scraping Italian citizens' data for its training and data modelling. Legally, there are concerns that ChatGPT comes into conflict with GDPR on a number of fronts: the inaccuracy of information that can be generated about individuals, the lack of notification to individuals that their data is being scraped, the lack of age restrictions and the lack of a legal basis for the mass collection of personal data. The key question under GDPR is whether OpenAI and ChatGPT have a legitimate basis for collecting the data.

For all the concerns surrounding this, AI does have very practical and important uses in fields such as healthcare, and in helping businesses achieve efficiencies. It is very hard, however, to gloss over the risks and potential threats; this is, after all, a very fluid field, and fast-paced technological developments will continue to shape future possibilities.

Organisations should therefore prepare for this by defining a strategy to tackle issues arising from the use of the technology. For now, this should be done using the trinity of People, Process and Technology to identify and respond to potential threats.

Governance and policy structures need to set out reasonable-use protocols for AI tools and define the boundaries of what acceptable use may entail. Many of the principles governing this are still in the early stages of maturing into regulated practice. However, as with all data-driven toolsets, people need to be aware of the consequences of breaches and of their responsibilities under the law.

Processes should fully encapsulate both the threat from and the threat to AI technologies. Both an inward- and an outward-facing lens are needed to consider holistically how the attack surface will change with its use. Good practices and critical control steps already in place should be extended to account for AI. Perhaps more than ever before, keeping on top of updates and software standards is critical to reducing vulnerability to a breach.

Lastly, constructive dialogue is needed with technology vendors to understand what good use of AI in any environment might look like. One prediction in this space is that identity management and verification may become a key flashpoint as more and more AI-generated content becomes available. Being able to distinguish between what is human and what is not might make the difference in a range of scenarios, from political power and public affairs to warfare.
