Solving enterprise AI governance with a data-centric approach

Eamonn O’Neill, Co-Founder & Chief Technology Officer at Lemongrass, discusses the challenges businesses face in securing and governing generative AI tools, proposing a strategy that integrates existing data governance policies.

The approach that many businesses have taken to securing GenAI tools and services – and the data that powers them – has been a mess.

Some organisations have become so wary of exposing sensitive information to GenAI services like ChatGPT that they block them altogether on their corporate networks – a knee-jerk and largely ineffectual approach. Employees who want to use these services can easily reach them in other ways, such as on their personal devices.

In other cases, businesses have attempted to shape AI security and governance strategies around regulatory requirements. Because there has been minimal global regulatory guidance on GenAI to date, the result is often chaotic, ever-shifting AI governance policies that may or may not align with the mandates regulators eventually settle on.

Here’s a better approach: use existing data security and governance policies as the foundation for managing generative AI services within an organisation. This method is superior for securing generative AI, and here’s what it looks like in practice.

The need for GenAI governance

There’s no denying that enterprises need to develop and enforce clear security and governance policies for GenAI. GenAI services deployed internally can potentially access highly sensitive enterprise data, with major implications for data privacy and security.

For example, if an employee feeds proprietary business information into ChatGPT as part of a prompt, ChatGPT could theoretically expose that data to a competitor at any point thereafter. Because businesses have no control over how ChatGPT operates, they have no control over how it uses their data once it has been ingested.

Likewise, there is no way to ‘delete’ sensitive data from a GenAI model. Once ingested, it’s there forever, or at least until the model ceases to operate. In this sense, GenAI within the enterprise raises deep challenges for businesses’ ability to control the lifecycle of private information: unlike with a database or file system, you can’t simply remove private data from a model once you no longer need it.

Complicating these challenges is the multitude of GenAI services from various vendors. Because of this diversity, there is no easy way of implementing access controls that define which employees can perform which actions across the disparate GenAI solutions that enterprises might adopt. Identity management frameworks like Active Directory might eventually evolve to support unified sets of access controls across GenAI services, but they’re not yet there.

For these reasons, enterprises must define security and governance rules for GenAI. Specifically, the rules need to define which data GenAI models can access, how they can access that data, and which controls govern employees’ interactions with GenAI services.
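
To make this concrete, here is a minimal sketch of what such rules might look like in code. Everything in it – the GenAIPolicy class, the role names, the classification labels – is an illustrative assumption, not a reference to any real product’s API.

```python
# A minimal, hypothetical sketch of GenAI governance rules: which data
# classifications a model may ingest, and which actions each role may take.
from dataclasses import dataclass, field


@dataclass
class GenAIPolicy:
    # Data classifications the GenAI service is allowed to touch.
    allowed_classifications: set[str] = field(
        default_factory=lambda: {"public", "internal"}
    )
    # Actions each employee role may perform against the service.
    role_actions: dict[str, set[str]] = field(
        default_factory=lambda: {
            "analyst": {"query"},
            "data_steward": {"query", "ingest"},
        }
    )

    def may_ingest(self, classification: str) -> bool:
        return classification in self.allowed_classifications

    def may_act(self, role: str, action: str) -> bool:
        return action in self.role_actions.get(role, set())


policy = GenAIPolicy()
assert policy.may_ingest("internal")
assert not policy.may_ingest("restricted")  # e.g. trade secrets stay out
assert policy.may_act("analyst", "query")
assert not policy.may_act("analyst", "ingest")
```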

Data governance as the basis for GenAI governance

Most organisations recognise the importance of AI governance. However, as mentioned previously, implementing effective governance policies and controls has proved quite challenging for many organisations, largely because they don’t know where to begin.

One practical way to solve this challenge is to model AI governance rules on the data governance policies that most businesses have long had in place. After all, many of the privacy and security issues at stake surrounding GenAI ultimately boil down to data privacy and security issues – and so data governance rules can be extended to govern AI models, too.

What this means in practice is erecting access controls within GenAI services that restrict which data those services can access, based on the data governance rules a business already has in place. The implementation will look different, because businesses will need access control tools that support generative AI models rather than databases, data lakes, and so on. But the result is the same, in the sense that the controls define who can do what with an organisation’s data.
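
As a rough illustration, the classification labels a business already applies to its records can serve as the allow-list for what a GenAI retrieval index may ingest. The record format and the governance_label field below are assumptions for the sketch, not a prescribed schema.

```python
# Illustrative only: reuse existing data governance labels to decide which
# records a GenAI retrieval index may ingest in the first place.
records = [
    {"id": 1, "text": "Q3 marketing plan", "governance_label": "internal"},
    {"id": 2, "text": "Customer PII export", "governance_label": "restricted"},
    {"id": 3, "text": "Public press release", "governance_label": "public"},
]

# The same allow-list the business already applies to databases and data lakes.
GENAI_INGESTABLE = {"public", "internal"}

index_corpus = [r for r in records if r["governance_label"] in GENAI_INGESTABLE]
# Only records 1 and 3 ever reach the model. Keeping "restricted" data out at
# ingestion matters because, as noted above, data cannot later be deleted
# from a trained model.
```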

This approach is particularly effective because it lays the groundwork for adopting GenAI services as a new interface for accessing and querying business data. As long as you properly govern and secure your GenAI services, you can have employees rely on those services to ask questions about your data – and you can have confidence that the level of access each employee has is appropriate, thanks to the AI governance controls you’ve built.
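
Building on the sketch above, query-time enforcement might look like the following: a user’s existing data entitlements (for example, drawn from a directory service) filter what the model is allowed to see as context. The user_entitlements table and the governed_search function are hypothetical names for the sketch.

```python
# Hypothetical query-time enforcement: retrieval is filtered by the user's
# existing entitlements before any text is handed to the model.
user_entitlements = {
    "alice": {"public", "internal"},
    "bob": {"public"},
}


def governed_search(user: str, question: str, corpus: list[dict]) -> list[dict]:
    allowed = user_entitlements.get(user, {"public"})
    visible = [r for r in corpus if r["governance_label"] in allowed]
    # Only `visible` records would be passed to the model as context for
    # `question` – the model never sees data the user could not read directly.
    return visible
```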

A simple, efficient approach to data governance and AI governance

Ultimately, this approach to AI governance does more than provide a clear foundation (in the form of data governance rules) for deciding which data the users of enterprise AI services can and can’t access. It also simplifies data governance itself, because it minimises the need to implement access controls for each individual data resource.

When GenAI services become a centralised interface for interacting with data, businesses can simply enforce data governance through GenAI. This is much easier and more efficient than establishing different controls for every data asset within the organisation.
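
Continuing the hypothetical sketch above, the efficiency gain is that a single enforcement point serves every query, whatever underlying store the records came from:

```python
# One governed interface, many data assets: the same check applies to all.
print([r["id"] for r in governed_search("alice", "What is the Q3 plan?", index_corpus)])
# -> [1, 3]
print([r["id"] for r in governed_search("bob", "What is the Q3 plan?", index_corpus)])
# -> [3]
```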

Thus, instead of shooting in the dark to try to come up with enterprise AI governance policies – or, worse, blocking AI services altogether and crossing your fingers that employees don’t work around your restrictions – take stock of the data governance rules you already have in place, and use them as a pragmatic basis for defining AI governance controls.

Contributor Details

Eamonn O’Neill
Co-Founder & Chief Technology Officer
Lemongrass
