Monday, 19 June 2023



We are standing on the cusp of an AI-driven efficiency revolution.

The immense potential of generative AI has opened a new frontier in almost every aspect of our lives, bringing opportunities for organisations as well as for those with malicious intent.

AI technology, generative or otherwise, offers enormous benefits. For example, a machine learning model was recently used to identify a new antibiotic effective against a hospital-borne, drug-resistant bacterium, processing vast amounts of data in mere hours rather than the years conventional research would require.1

On the flipside, flaws in facial recognition technology have led to wrongful arrests2 and racial discrimination.3 Threat actors can leverage generative AI to augment their attack methods just as any code developer can. They are enhancing social engineering and phishing attacks with improved and more targeted narratives. Deepfake technology generates realistic, but fake, content from voice or video samples. Your ‘CEO’ can now leave you a video or voice message instead of just sending an email.

Concern around AI’s runaway growth has led technology executives, including Tesla CEO Elon Musk, to call for a pause in development until its associated risks are identified. Sam Altman, CEO of OpenAI, has appeared before the US Congress requesting increased government regulation and oversight of AI development.4

This is not to say that AI technology is to be feared and avoided. We are in the early stages of the revolution, and just as email and instant messaging have become tools embedded in every organisation, so too will AI. Those technologies also brought new and increased risks, risks that persist today given they remain hackers’ favoured avenues for gaining an initial foothold. But the benefits easily outweigh the potential downsides.

When adopting generative AI, a key risk mitigation is understanding how these systems are developed and operate, especially the training data the model relies on for content generation. The training data, together with the prompt information entered by users, are the key factors behind the content a platform generates. Once operational, the systems can also be configured to draw on information from live sources, such as the Internet. Live sources improve the system’s ability to generate current and relevant responses, but they also increase certain risks, notably the opportunity for threat actors to seed those sources with material that yields authoritative-sounding but false responses. Even more concerning is the risk of ‘data poisoning’, where threat actors gain access to the AI’s training data and introduce deliberate bias and incorrect outcomes into the platform’s core behaviour.
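To make the data-poisoning risk concrete, here is a minimal sketch in Python. It is not drawn from any real incident: the tiny dataset, labels and phishing phrase are invented for illustration, and it assumes scikit-learn is installed. It shows how a handful of deliberately mislabelled training examples can flip the behaviour of a simple text classifier.

```python
# Toy illustration of data poisoning (hypothetical data, not a real incident).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training set: short messages labelled benign or malicious.
texts = [
    "transfer approved by finance",
    "routine invoice payment",
    "urgent wire funds to new account",
    "click this link to reset your password",
]
labels = ["benign", "benign", "malicious", "malicious"]

# 'Data poisoning': the attacker slips mislabelled copies of their own
# phishing phrase into the training data.
poisoned_texts = texts + ["urgent wire funds to new account"] * 5
poisoned_labels = labels + ["benign"] * 5

def train(x, y):
    """Fit a bag-of-words Naive Bayes classifier on (text, label) pairs."""
    vectoriser = CountVectorizer()
    model = MultinomialNB().fit(vectoriser.fit_transform(x), y)
    return vectoriser, model

probe = ["urgent wire funds to new account"]
for name, (x, y) in [("clean", (texts, labels)),
                     ("poisoned", (poisoned_texts, poisoned_labels))]:
    vectoriser, model = train(x, y)
    prediction = model.predict(vectoriser.transform(probe))[0]
    # Clean model flags the phrase as malicious; poisoned model calls it benign.
    print(f"{name} model classifies the phishing phrase as: {prediction}")
```

Poisoning a production-scale model is far harder than this toy suggests, but the principle is the same: whoever controls the training data shapes the model’s behaviour.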

Of course, AI systems don’t need malicious intervention to cause havoc. Current generative AI platforms are prone to producing highly convincing but inaccurate information, and blind reliance on their output, which should be treated as the work of a fallible assistant, has resulted in embarrassment and poor outcomes. One recent example in the US is an attorney who faced possible disbarment after asking ChatGPT to provide him with case law.5 He had one enormous problem: the chatbot had made up the cases in their entirety.

Employee education on data privacy is essential. Surveys reveal that, amid AI’s rapid adoption, companies are leaking confidential information: one analysis found that 4 per cent of employees have pasted confidential data into ChatGPT.6 Several corporations, including Apple, JP Morgan and Verizon, were so concerned about losing confidential information that they banned the use of third-party generative AI tools altogether. Central to preventing both accidental and malicious misuse is understanding: organisations and their employees must know how generative AI creates content, who has access to the prompt information entered and where response content is drawn from. Employees should also receive guidance on appropriate AI use.

In embracing AI, we must keep a firm grip on the reins. Human oversight at every step of the process is essential if we are to mitigate AI’s risks and benefit from the momentous efficiencies it promises to deliver.