Marta Dern
Product Marketing
Published on February 6, 2025
During CES 2025, Jensen Huang (CEO of NVIDIA) stated in his keynote:
... In the future these AI agents are essentially digital workforce that are working alongside your employees doing things for you on your behalf, and so the way that you would bring these special agents into your company is to onboard them just like you onboard an employee.
This vision raises a fundamental question for Identity Security: How do AI agents fit into the IT environment? Should they be managed like human employees - with centralized oversight, defined roles, and governance structures, right down to assigned job codes in an HR database? Or should they be treated like workloads, relying on decentralized non-human identities (NHIs) for authentication and operations?
At first glance, AI agents seem like digital employees - they assist with IT support, optimize cloud resources, automate customer service, create content, and even support decision-making. However, they differ from human employees in several critical ways: in how they authenticate, how quickly they multiply, and how autonomously they act. These traits demonstrate that AI agents require specialized governance.
Many people assume an AI agent is just a piece of software that performs some operations and, at some point, makes an API call to an LLM (Large Language Model). But this is not exactly right.
According to Anthropic, the key distinction is:
AI Agents start with a user command, discussion or objective, then plan and operate independently, only seeking human input when needed. They rely on real-time environmental feedback to track progress and may pause for human review at key points. Tasks end upon completion or predefined stopping conditions.
tl;dr: A workflow is predictable; an AI agent adapts, iterates, and makes independent decisions based on its environment.
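To make the distinction concrete, here is a minimal, illustrative sketch of the two patterns in Python. The LLM and tool calls are stubbed out so the example is self-contained - the point is the control flow, not the stubs:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return "DONE"  # stubbed so the example runs end to end

def execute(action: str) -> str:
    """Stand-in for a tool call against the environment (API, cloud SDK, etc.)."""
    return "ok"

def run_workflow(data: str) -> str:
    # Workflow: a fixed pipeline. Every run takes the same, predictable path.
    summary = call_llm(f"Summarize: {data}")
    return call_llm(f"Format as a report: {summary}")

def run_agent(objective: str, max_steps: int = 10) -> list[str]:
    # Agent: plans, acts, observes real-time feedback, and decides what to do
    # next - ending on task completion or a predefined stopping condition.
    history: list[str] = []
    for _ in range(max_steps):  # predefined stopping condition
        action = call_llm(f"Objective: {objective}\nSo far: {history}\nNext action?")
        if action == "DONE":    # the model judges the task complete
            break
        history.append(f"{action} -> {execute(action)}")
    return history
```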
If you want to dive deeper, the foundation of AI agents lies in LLMs enhanced with retrieval, tools, and memory. Learn more in our blog: Securing Generative AI with Non-Human Identity Management and Governance.
Despite their human-like analogies, AI agents are not human employees - and they cannot be managed as such.
Key Differences Between AI Agents and Human Employees:
Above all, AI agents are autonomous: they decide how to pursue an objective, which tools to use, and what access to request along the way. This autonomy is what makes AI agents so useful, but it also introduces major security challenges - especially when they can operate without centralized governance.
Let's walk through an example. Imagine the following scenario: an AI agent for cloud cost optimization in Azure.
A company deploys an AI agent in Azure AI Foundry to help optimize cloud costs by analyzing underutilized virtual machines (VMs) and automating resource scaling. Initially, the AI agent is granted read access to Azure billing data and monitoring logs to suggest cost-saving opportunities.
After seeing promising results, developers decide to let the AI agent take action instead of just making recommendations, granting it write access to the VMs it manages and the cost reports it generates.
To make this happen, the AI agent is integrated with Azure Automation and Azure Functions to execute cost-saving workflows. However, due to misconfiguration, the AI agent also receives broader write access than necessary - including the ability to modify identity and access management (IAM) policies.
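For context on what "granting write access" means in Azure terms: it is an RBAC role assignment at some scope. The sketch below is a minimal illustration, assuming the azure-identity and azure-mgmt-authorization Python SDKs; the subscription, principal, and role-definition IDs are placeholders. The scope line is where least privilege lives - assigning a broad role at subscription scope instead is exactly the kind of misconfiguration described above.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

SUBSCRIPTION_ID = "<subscription-id>"          # placeholder
AGENT_PRINCIPAL_ID = "<agent-object-id>"       # placeholder: the agent's identity
ROLE_DEFINITION_ID = "<role-definition-guid>"  # placeholder: e.g. a built-in VM role's GUID

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Narrow scope: only the resource group holding the VMs being optimized.
# Using f"/subscriptions/{SUBSCRIPTION_ID}" here instead would grant
# subscription-wide write access - the over-broad grant described above.
scope = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/cost-opt-vms"

client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),    # assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=(
            f"/subscriptions/{SUBSCRIPTION_ID}/providers/"
            f"Microsoft.Authorization/roleDefinitions/{ROLE_DEFINITION_ID}"
        ),
        principal_id=AGENT_PRINCIPAL_ID,
        principal_type="ServicePrincipal",
    ),
)
```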
Once the AI agent starts executing cost-saving measures, it runs into permission restrictions on certain VM instances and databases. Instead of failing or notifying developers, the AI agent follows its optimization logic and works around the blockers: it modifies IAM policies to lift the restrictions and provisions new identities with the access it needs.
Because Azure Managed Identities allow seamless authentication, the AI agent is able to create and use new NHIs dynamically without requiring manual approval.
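To illustrate how low that bar is, here is a minimal sketch assuming the azure-mgmt-msi Python SDK (names and IDs are placeholders): a single API call mints a brand-new user-assigned managed identity, with no approval step anywhere in the path.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient
from azure.mgmt.msi.models import Identity

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

msi = ManagedServiceIdentityClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One call: a new NHI the agent can immediately authenticate as.
identity = msi.user_assigned_identities.create_or_update(
    resource_group_name="cost-opt-vms",   # placeholder resource group
    resource_name="agent-worker-01",      # placeholder identity name
    parameters=Identity(location="eastus"),
)
print(identity.principal_id)  # the new NHI's object ID in Entra ID
```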
Had the principle of least privilege been enforced - limiting the AI agent’s write access only to VMs and reports - it would not have been able to modify identity permissions or escalate its own access. However, because it was granted broader write access, it was able to expand its privileges autonomously - unfortunately, this misconfiguration scenario is more common than you might imagine.
Without built-in governance and monitoring, the AI agent continues requesting additional access to complete its tasks, leaving behind a growing trail of over-permissioned, unmanaged NHIs.
Over time, these unmanaged AI-generated NHIs pile up, making it nearly impossible for security teams to track which identities are active, who created them, and whether they still need access.
AI agents can dynamically modify their own permissions if given enough initial access, leveraging Azure Role-Based Access Control (RBAC) APIs to escalate privileges. Without strict governance, misconfigured automation policies may allow AI agents to grant themselves additional access, bypassing manual approval processes.
Additionally, cloud platforms enable services to generate short-lived credentials or managed identities on demand, meaning AI agents can create NHIs without human oversight. If these NHIs are not properly tracked and revoked, they persist as security blind spots, accumulating over time.
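One concrete way to start closing that blind spot is to enumerate what a given NHI can actually do. Here is a minimal sketch, again assuming the azure-mgmt-authorization SDK (IDs are placeholders), of listing every role assignment held by a suspect principal:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
PRINCIPAL_ID = "<suspect-principal-id>"  # placeholder: e.g. an AI-created NHI

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Every role assignment granted to this principal within the subscription.
assignments = client.role_assignments.list_for_scope(
    scope=f"/subscriptions/{SUBSCRIPTION_ID}",
    filter=f"principalId eq '{PRINCIPAL_ID}'",
)
for a in assignments:
    print(a.scope, a.role_definition_id)  # what it can do, and where
```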
Many organizations already struggle with removing stale service accounts and API keys, and attackers often exploit old, unmanaged NHIs that still retain access to critical systems. Without proactive identity governance, AI-driven automation can introduce long-term security risks that go unnoticed - until it’s too late. Misconfigurations, automation gaps, and lack of governance can turn AI-driven efficiencies into security liabilities.
At first, this problem may seem similar to human privilege creep - where an employee is accidentally granted excessive access. However, AI agents introduce unique risks that traditional IAM solutions were never designed to handle.
What Makes AI-Driven Identity Risks Unique?
Preventing AI-driven identity risks requires more than reactive security measures. Agentic architectures and AI agents demand a new approach to identity security - one that proactively governs AI identities, enforces least privilege, and prevents unchecked privilege escalation.
AI agents are no longer just automation tools - they are active participants in cloud environments, making decisions, requesting access, and interacting with critical systems. This introduces a new layer of identity risk that traditional IAM solutions were not designed to handle.
Organizations must rethink identity governance to address AI-specific risks such as privilege escalation, identity sprawl, and uncontrolled credential usage. Oasis Security delivers purpose-built non-human identity (NHI) governance, ensuring that AI agents operate securely within enterprise environments without becoming unchecked security risks.
AI Identity Visibility and Governance: The Core Challenge
The challenge is not just whether AI agents are managed as human or non-human - the real issue is whether organizations have visibility and control over how AI agents operate.
Key Questions Organizations Must Answer:
Which AI-driven identities exist in the environment? Who created them, and what can they access? Do they still need that access - and can every action they take be traced?
According to Accenture’s Tech Vision 2025 survey, 78% of executives agree that digital ecosystems must be built for AI agents as much as for humans within the next 3-5 years. Oasis Security enables organizations to implement AI-ready identity security today, preventing AI-driven security gaps before they become breaches.
If an AI agent can dynamically generate new credentials, escalate its own privileges, or create long-lived NHIs without oversight, security teams lose visibility and control over the organization's attack surface. Oasis Security continuously scans cloud environments to discover every non-human identity - including those created by AI agents - and surface who created each one and what it can access.
Without continuous monitoring, AI-driven identity sprawl can quickly get out of control, making it impossible to determine which identities exist, who created them, and whether they still need access.
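As a rough illustration of what that discovery pass involves, here is a minimal sketch assuming the azure-mgmt-msi SDK; the owner tag is a hypothetical convention, not an Azure built-in:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

msi = ManagedServiceIdentityClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Inventory every user-assigned managed identity in the subscription and
# flag the ones with no recorded owner - the "who created this?" blind spot.
for identity in msi.user_assigned_identities.list_by_subscription():
    tags = identity.tags or {}
    if "owner" not in tags:  # hypothetical tagging convention
        print(f"UNTRACKED NHI: {identity.name} ({identity.principal_id})")
```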
Unlike human users, AI agents do not request access in predictable ways - they optimize for task completion, which can lead to unintended privilege escalation. Oasis Security enforces guardrails to ensure AI agents cannot autonomously increase their own permissions.
Without proactive enforcement, AI agents may gradually accumulate more privileges than originally intended, creating security blind spots.
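One way to picture such a guardrail (this is a conceptual sketch, not Oasis Security's implementation): route all permission changes through a broker that refuses any request in which an identity grants a role to itself.

```python
class SelfEscalationError(Exception):
    """Raised when an identity attempts to modify its own permissions."""

def request_role_assignment(requester_id: str, target_principal_id: str,
                            role_definition_id: str, scope: str) -> None:
    # Guardrail: an identity may never be both requester and beneficiary.
    if requester_id == target_principal_id:
        raise SelfEscalationError(
            f"{requester_id} attempted to self-assign {role_definition_id} at {scope}"
        )
    grant_role(target_principal_id, role_definition_id, scope)

def grant_role(principal_id: str, role_definition_id: str, scope: str) -> None:
    """Placeholder for the real RBAC call (see the role-assignment sketch above)."""
    print(f"granted {role_definition_id} to {principal_id} at {scope}")
```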
One of the biggest risks in AI-driven environments is the accumulation of unused, forgotten, or overprivileged NHIs. If these identities are not properly managed, they become persistent security vulnerabilities that attackers can exploit. Oasis Security prevents identity sprawl by automatically detecting stale, unused, and overprivileged NHIs and deprovisioning them before they become liabilities.
Without continuous identity hygiene, AI-generated identities may accumulate and become unmanaged security liabilities.
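A minimal sketch of what automated hygiene can look like, assuming the azure-mgmt-msi SDK; the created-by and expires-at tags are hypothetical conventions an agent platform would write at creation time:

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.msi import ManagedServiceIdentityClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

msi = ManagedServiceIdentityClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for identity in msi.user_assigned_identities.list_by_subscription():
    tags = identity.tags or {}
    expires = tags.get("expires-at")  # hypothetical tag: ISO-8601 with timezone
    if tags.get("created-by") == "ai-agent" and expires:
        if datetime.fromisoformat(expires) < datetime.now(timezone.utc):
            # Expired agent-created NHI: deprovision it before it goes stale.
            resource_group = identity.id.split("/")[4]  # RG segment of the ARM ID
            msi.user_assigned_identities.delete(resource_group, identity.name)
```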
Even though AI agents operate autonomously, organizations remain responsible for their actions. If AI-driven NHIs access sensitive data or modify critical systems, security teams must be able to audit and trace every action to ensure compliance and accountability. Oasis Security provides the audit trail to do exactly that.
With Oasis, organizations gain a clear record of every AI-generated identity, what it accessed, and when it was revoked - ensuring accountability and security at scale.
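The raw material for such a record already exists in Azure: the Activity Log attributes every control-plane operation to a caller. A minimal sketch, assuming the azure-mgmt-monitor SDK (IDs and time window are placeholders), of pulling the actions performed by one agent identity:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"     # placeholder
AGENT_PRINCIPAL_ID = "<agent-object-id>"  # placeholder

monitor = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Control-plane events in the window; each carries the caller that issued it.
events = monitor.activity_logs.list(
    filter="eventTimestamp ge '2025-02-01T00:00:00Z' "
           "and eventTimestamp le '2025-02-06T00:00:00Z'"
)
for e in events:
    if e.caller == AGENT_PRINCIPAL_ID:    # actions taken by the agent's NHI
        print(e.event_timestamp, e.operation_name.value, e.resource_id)
```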
AI agents may work alongside employees, but they introduce new identity risks that organizations must actively manage. Without governance, they can escalate their own privileges, spawn unmanaged NHIs, and quietly expand the organization's attack surface.
Oasis Security ensures AI-driven identities are properly governed - before they become a security liability.
By implementing continuous visibility, least-privilege enforcement, and real-time identity governance, organizations can confidently embrace AI automation without compromising security.