Navigating cybersecurity risks in an AI-everything world

Katie McCullough, Chief Information Security Officer at Panzura, warns of the cybersecurity risks associated with AI adoption and discusses how businesses can protect themselves against these risks.

To say that AI has gone mainstream would be an understatement. Just a few years ago, AI models were the preserve of data scientists. Now, the world’s most famous large language AI model, ChatGPT, has a staggering 100 million active monthly users, and around 60% of workers currently use or plan to use generative AI while performing their day-to-day tasks.

The rise of generative AI

ChatGPT, a language model based on the GPT (Generative Pre-trained Transformer) architecture, is designed to understand and generate human-like text based on the input it receives. By training on vast amounts of text from the internet, ChatGPT can answer questions, summarise text, and generate content.

This form of AI is known as ‘generative’ because it can produce new and unique content, such as images, code, text, art, and even music, by training itself using patterns in existing data.

While generative AI offers many productivity benefits, they come at a cost. Just as previous technological leaps – the advent of smartphones or social media, for example – changed the business risk landscape forever, GenAI models like ChatGPT have introduced and amplified concerns about ethics, privacy, misinformation, and cybersecurity risks.

AI regulation is coming

During times of seismic technological change — the new AI era being a case in point — they unleash a whole new raft of cybersecurity threats.

There is typically a time lapse between the initial wave of tech adoption and the formation of regulations and policies to help businesses and governments take advantage of the tech benefits while balancing their risks.

It took years for regulations such as the Children’s Online Privacy Protection Act (COPPA), the Digital Millennium Copyright Act (DMCA), and the General Data Protection Regulation (GDPR) to catch up with the realities of cybercrime, data theft, identity fraud, and so on.

For GenAI, only once robust regulations are in place can we be assured that companies will be held accountable for managing and mitigating cybersecurity threats.

The good news is that regulators have had to super-charge their legislative efforts to keep pace with AI development, and we will see the first policies and laws governing AI coming into force in 2024 in the USA, EU, and China. How effective these regulations prove to be remains to be seen.

ai regulation, generative ai
© shutterstock/3rdtimeluckystudio

China’s approach to AI regulation to date has been light touch. In the US, the legislative situation can get complex, with privacy laws at a federal level hard to enact, often leaving states to handle their own regulation.

What is clear is that security, risk mitigation measures, and regulation are acutely needed. A recent McKinsey study revealed that 40% of businesses intend to step up their AI adoption in the coming year. And, once businesses start using AI, they often increase adoption rapidly.

According to a study by Gartner, 55% of organisations that have deployed AI always consider it for every new use case they are evaluating.

However, while businesses are concerned about the cybersecurity risks relating to GenAI, according to McKinsey’s global study, only 38% are working to mitigate those risks.

What are AI’s biggest cybersecurity risks?

AI’s potential biases, negative outcomes, and false information have been discussed extensively. Fake citations, phantom sources, and even phoney legal cases are just a few cautionary tales about an overreliance on ChatGPT that can easily lead to reputational damage.

While users (should) by now know not to trust implicitly content generated by large language models, there’s a looming threat that many companies might be overlooking: the heightened cybersecurity risks.

By their very nature, AI technologies can amplify the risk of sophisticated cyberattacks. Simple chatbots, for instance, can inadvertently aid phishing attacks, generate fake accounts on social media platforms without errors, and even rewrite malware to target different programming languages.

Moreover, the vast amounts of data fed into these systems can be stored and potentially shared with third parties, increasing the risk of data breaches. In a recent Open Worldwide Application Security Project (OWASP) AI security ‘top 10’ guide, access risks accounted for four vulnerabilities. Other significant risks are the threat to data integrity, which can be poisoned training data, supply chain and prompt injection vulnerabilities, or denial of service attacks.

In the US presential primaries in January 2024, Joe Biden’s voice was mimicked by AI and used in ‘robocalls’ to residents of New Hampshire, downplaying the need to vote. AI-generated voice fraud and deepfakes are now becoming a real risk, with research by McAfee suggesting that fraudsters only need around three seconds of audio or video footage to clone someone’s voice convincingly.

You can only protect what you can see

If the first challenge of securing AI usage within enterprises relates to the novel nature of the attack vectors, another complicating factor is the ‘shadow’ use of AI. According to Forrester’s Andrew Hewitt, 60% of will use their own AI in 2024.

On the one hand, this helps to boost productivity by speeding up and automating parts of people’s jobs. On the other hand, how can businesses mitigate AI’s legal, security, and cybersecurity risks they do not even know they have?

Hewitt calls this trend ‘BOYAI’ (bring your own AI) in an echo of a similar quandary that happened when first employees began using their mobile phones for business purposes in the early 2000s, a reminder that security teams have long had to balance the need to manage risks with the urge to innovate.

AI: Who’s ultimately accountable?

From a legal standpoint and a security, data handling, and compliance perspective, generative AI adoption has been a Pandora’s box of cybersecurity risks.

Until regulatory frameworks and policies catch up with AI development, the onus is on businesses to self-regulate, effectively creating a void in accountability and transparency. Many organisations will spend this time figuring out and formulating best practices and preparing for the likely regulatory impact of legislation such as the EU’s AI Act.

Others will be less proactive and more likely to be caught off guard. With easy access to the growing number of GenAI models on the market, employees could easily inadvertently input sensitive or proprietary information into free AI tools, creating a plethora of vulnerabilities.

These vulnerabilities could lead to unauthorised access or unintentional disclosure of confidential business information, including intellectual property and personally identifiable information.

As AI development races on at breakneck speed and before regulatory positions in key markets are finalised, how can businesses secure their data and limit their exposure to AI risks?

Know your AI usage

Other than official, sanctioned AI apps, security teams need to collaborate with business units to understand how AI is being used. This is not a witch hunt; it’s an important preliminary exercise to understand the demand for AI and the potential value it could bring.

Assess the business impact

Businesses need to evaluate the advantages and disadvantages of each AI usage scenario on a case-by-case basis.

It’s important to understand why certain AI tools are needed and what they—and the business—stand to gain. In some cases, small adjustments to a tool’s data access permissions (for example) will swing the reward/risk ratio, and the tool will become a sanctioned part of the tech stack.

Set clear policies

Good AI governance involves aligning AI tools with the company’s policies and risk posture. This might involve an AI ‘lab’ for testing new AI tools. While AI tools should not be left to individual discretion, employee experimentation should be encouraged – in a controlled manner according to company policy.

Encourage education and awareness

According to Forrester, 60% of employees will receive prompt training in 2024. Along with training on using AI tools effectively, employees must be trained on the cybersecurity risks associated with AI. As AI becomes embedded across all sectors and functions, it becomes increasingly important to make training available to all, regardless of whether they have a technical function.

Practice data hygiene with AI models

Chief Information Security Officers (CISOs) and tech teams cannot achieve good data hygiene independently and should work closely with other business units to classify data.

This helps determine which data sets can be used by AI tools without posing significant risks. For instance, susceptible data can be siloed and off-limits to specific AI tools, while less sensitive data can be used to experiment with to some degree.

Data classification is one of the core principles of good data hygiene and security. It’s also essential to prioritise using local LLMs over public ones where possible.

Anticipate regulatory changes

Regulatory changes are coming; that much is certain. Beware of investing too heavily in specific tools at an early stage. Similarly, staying updated with global AI regulations and standards can help businesses adapt swiftly.

What’s next for AI security?

AI will shape a new digital era that transforms everyday experiences, forges new business models, and enables unprecedented innovation. It will also usher in a new wave of cybersecurity vulnerabilities.

For businesses, one of their most pressing strategic concerns for the year ahead will be balancing the potential productivity gains from AI with an acceptable level of risk exposure.

As organisations worldwide prepare for legislation that will impact them, enterprises can take several proactive steps to identify and mitigate cybersecurity risks while embracing the power of AI.

Contributor Details

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Featured Topics

Partner News

Advertisements

Media Partners

Similar Articles

More from Innovation News Network