Securing Generative AI with Non-Human Identity Management and Governance

Joel McKown

Solutions Engineer

Published on April 4, 2024

There are many inevitabilities in technology; among them, that rapid innovation will introduce unique risks, and that three-letter acronyms will abide. Generative AI has become a top-of-mind conversation as businesses race to extract value from a new technological arena, one poised to transform our world much as earlier epochs like the emergence of the internet did. As we pursue the potential of AI-driven apps and automation, we must always consider what it takes to use and implement these technologies safely. In this post I will introduce some potentially new concepts and explain why proper non-human identity governance is needed to ensure the privacy and integrity of the data used in applications built on the RAG architectural model.

What is retrieval-augmented generation (RAG) architecture?

RAG is a model-agnostic architecture that lets the power of LLMs (large language models) like OpenAI's GPT (generative pre-trained transformer) leverage "grounding data", data specific to a customer use case, to power chat or Q&A applications. Blending the conversational power of an LLM with focused local data sets makes for a powerful tool, enabling apps that drive richer customer interactions and/or employee productivity. An example implementation would be an LLM of your choice tied to a local data store of product documentation, answering domain-specific questions about your product. All the power of human-like articulation from the LLM, combined with the relevance of your data, driving a productive use case, and fully autonomous.
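
To illustrate the flow, here is a minimal Python sketch of a RAG request. It assumes the openai package (v1+) with an API key in the environment; the model name is only an example, and `search_product_docs` is a hypothetical stand-in for whatever retrieval you run over your grounding data.

```python
# A minimal sketch of the RAG request flow, assuming the openai Python
# package (v1+) with an API key in the environment. `search_product_docs`
# is a hypothetical stand-in for your own retrieval layer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_product_docs(question: str) -> list[str]:
    # Placeholder retrieval: swap in your vector or keyword search here.
    return ["<relevant passage from your product documentation>"]


def answer_with_rag(question: str) -> str:
    # 1. Retrieve grounding data relevant to the question.
    grounding = "\n\n".join(search_product_docs(question))
    # 2. Hand the grounding data and the question to the LLM.
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Answer using only this documentation:\n" + grounding},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```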

AI will further increase the non-human identity attack surface

A non-human identity (NHI) is a digital construct that describes the credentialed access used for machine-to-machine communication. These identities include service accounts, tokens, access keys, API keys, and countless others. NHI is the most rapidly expanding type of identity and the least governed attack surface for organizations today. The creation of NHIs is democratized across dev, ops, and other teams, is generally self-governed, and proliferates at the pace of an organization's digital innovation. NHIs are used most liberally in the cloud, where the identity itself becomes the perimeter, in other words the only form of access control. The combination of poor NHI governance and ubiquitous access has made the risk very pronounced. That risk is not going away: it expanded dramatically with the adoption of cloud, and it is on the precipice of further significant expansion with the advent of AI.
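
To make that concrete, here is a minimal sketch of one common NHI: an Entra ID service principal authenticating to a storage account with a client secret. It assumes the azure-identity and azure-storage-blob Python packages; the tenant, client, and account names are placeholders.

```python
# One concrete NHI: an Entra ID service principal authenticating to a
# storage account with a client secret. Assumes the azure-identity and
# azure-storage-blob packages; all IDs below are placeholders.
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-client-id>",      # the service principal
    client_secret="<client-secret>",  # the secret that must be governed
)

blob_service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=credential,
)
# No human sits in this flow; in the cloud, this credential is the perimeter.
```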

The intersection of NHI and RAG

In the diagram above I outline the basic flow of the RAG architecture, and we can quickly see where NHIs form the bridges of communication for backend machine-to-machine interaction. I will focus your attention on the data sources, as I feel this is where implementation risk is most likely to live. One predominant implementation pattern observed in the wild is the consistent use of storage accounts as the repository for the unstructured data that powers RAG-driven apps.
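
As a sketch of that data-source hop, here is how an application's NHI might pull unstructured grounding documents from a blob container before they are chunked and indexed. The account and container names are placeholders, and the azure-identity and azure-storage-blob packages are assumptions, not a prescribed setup.

```python
# A sketch of the data-source hop: the app's NHI reads unstructured
# grounding documents out of a container before they are chunked and
# indexed. Account and container names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name="grounding-data",
    credential=DefaultAzureCredential(),  # whatever NHI the app runs as
)

# Pull every document in the container into memory for indexing.
documents = {
    blob.name: container.download_blob(blob.name).readall()
    for blob in container.list_blobs()
}
```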

Exploring some of the access methods available for storage accounts in cloud environments like Azure helps us understand the potential risks. Azure Blob Storage supports many forms of identity and access management, among them SAS tokens, service principals (Entra ID), and access keys. When configuring any of these access methods, the utmost care should be taken to ensure least privilege and adherence to accepted best practices. It is unfortunately commonplace to see very old, unrotated access keys (full access by default), SAS tokens with privileged access and very long TTLs (time-to-live), or service principals whose usage is stale and whose secrets are unrotated and not expiring any time soon. These examples are just some of the NHIs that can introduce unwanted risk to a RAG-based application.
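
By contrast with the risky patterns above, here is a sketch of issuing a read-only SAS token scoped to a single container with a one-hour TTL, assuming the azure-storage-blob Python package; the account and container names are placeholders, and in practice the account key would come from a vault.

```python
# The safer end of the spectrum described above: a read-only SAS token
# scoped to one container with a one-hour TTL, instead of a long-lived
# full-access account key. Names are placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="<account>",
    container_name="grounding-data",
    account_key="<account-key>",  # fetch from a vault, never hardcode
    permission=ContainerSasPermissions(read=True, list=True),  # least privilege
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),    # short TTL
)
```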

Secrets used to assume non-human identities like those described above are stolen, accidentally exposed, and kept by former employees after they leave. The resulting risk to our app is multi-pronged. Data privacy for any AI grounding data is a priority: sensitive data, and the identities used to access it, should be locked down, monitored, and lifecycle-managed properly. Improper NHI hygiene can lead to data leakage, and if you think that's unlikely in the context of the examples above, review the Microsoft AI team's own recent data exposure incident involving SAS tokens.
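
Lifecycle management here largely means rotation. Below is a minimal sketch of rotating a storage account access key, assuming the azure-mgmt-storage management SDK; subscription, resource group, and account names are placeholders, and the rotation cadence and vault integration are up to you.

```python
# Routine rotation of a storage account access key, assuming the
# azure-mgmt-storage management SDK. Subscription, resource group, and
# account names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Regenerate key1; consumers should read keys from a vault so they pick
# up the new value without redeploys.
client.storage_accounts.regenerate_key(
    resource_group_name="<resource-group>",
    account_name="<storage-account>",
    regenerate_key={"key_name": "key1"},
)
```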

Data poisoning creates another unique risk in RAG-based architectures. Data sources in RAG architectures are likely cloud-based and editable via NHIs. The integrity of the grounding data, and consequently of the responses our AI-enabled chat bot gives our customers and employees, should be protected from unauthorized additions of material. The potential organizational risk of an incorrect or unpleasant response, caused by malicious pollution of grounding data sets via misconfigured or poorly maintained NHIs, should be measured and accounted for.
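
One simple control against unauthorized additions, sketched below under the assumption that grounding documents pass through a local staging directory before indexing: maintain a manifest of approved documents and their SHA-256 digests, and flag anything new or changed.

```python
# A simple integrity control against unauthorized additions: keep a
# manifest of approved grounding documents and their SHA-256 digests,
# and flag anything new or modified before it is indexed.
import hashlib
import json
import pathlib


def build_manifest(doc_dir: str) -> dict[str, str]:
    # Digest every file in the staging directory.
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in pathlib.Path(doc_dir).iterdir()
        if p.is_file()
    }


def detect_tampering(doc_dir: str, manifest_path: str) -> list[str]:
    approved = json.loads(pathlib.Path(manifest_path).read_text())
    current = build_manifest(doc_dir)
    # Unknown names are unauthorized additions; digest mismatches are edits.
    return [name for name, digest in current.items()
            if approved.get(name) != digest]
```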

Oasis provides the visibility and lifecycle management you need to deploy these technologies safely:

  • Inventory of NHIs across diverse multi-cloud/SaaS provider environments and on-prem
  • Actionable insights into the most egregious NHI posture issues, with guided or automated remediation
  • Ongoing automation of lifecycle management for critical NHIs tied to high-value projects
  • Complete context around NHI usage, from consumption to entitlements. Understand everything you need to know about NHI usage in your environment, including those elusive SAS tokens mentioned above!