AI Agents: Human or Non-Human?

Blog thumbail: AI agents human or non-human
Marta Dern

Marta Dern

Product Marketing

Published on

February 6, 2025

During CES 2025, Jensen Huang (CEO of NVIDIA) stated in his keynote

... In the future these AI agents are essentially digital workforce that are working alongside your employees doing things for you on your behalf, and so the way that you would bring these special agents into your company is to onboard them just like you onboard an employee.

This vision raises a fundamental question for Identity Security: How do AI agents fit into the IT environment? Should they be managed like human employees - with centralized oversight, defined roles, and governance structures, meaning they have assigned job codes in an HR database? Or should they be treated like workloads, relying on decentralized non-human identities (NHIs) for authentication and operations?

At first glance, AI agents seem like digital employees - they assist with IT support, cloud optimization, automate customer service, content creation, and even decision-making. However, they differ from human employees in several critical ways:

  • AI agents don’t have intent: They execute tasks based purely on logic and objectives, without human reasoning, even though advanced models are becoming increasingly capable of human-like decision-making.
  • They don’t use usernames and passwords: Instead of traditional authentication methods like passwords, with compensating controls like MFA or SSO, AI agents rely on API keys, managed identities, service principals, and other machine-to-machine authentication methods. 
  • They lack contextual awareness: Humans naturally apply judgment and ethical reasoning when making decisions. AI agents, however, strictly follow instructions, meaning they may misinterpret incomplete, ambiguous, or misleading context, leading to hallucinated outputs, unintended actions, or even security incidents - sometimes with significant consequences.

These traits clearly demonstrate that AI Agents require specialized governance. 

What is an AI Agent?

Many people assume an AI agent is just a piece of software that performs some operations and, at some point, makes an API call to an LLM (Large Language Model). But this is not exactly right.

AI Workflows vs. AI Agents

According to Anthropic, the key distinction is:

  • Workflows are structured systems where LLMs and tools follow predefined code paths. These systems have clear steps, making API calls to an LLM at specific points.
  • AI Agents are dynamic systems where LLMs direct their own processes and tool usage, deciding in real-time how to accomplish a task. In other words, AI Agents maintain control over how they accomplish tasks. 
Autonomous agent. Source: https://www.anthropic.com/research/building-effective-agents

AI Agents start with a user command, discussion or objective, then plan and operate independently, only seeking human input when needed. They rely on real-time environmental feedback to track progress and may pause for human review at key points. Tasks end upon completion or predefined stopping conditions. 

tl;dr: A workflow is predictable; an AI agent adapts, iterates, and makes independent decisions based on its environment. 

If you want to dive deeper, the foundation of AI agents lies in LLMs enhanced with retrieval, tools, and memory. Learn more in our blog: Securing Generative AI with Non-Human Identity Management and Governance.

Why AI Agents Are Not Human Employees

Despite their human-like analogies, AI agents are not human employees - and they cannot be managed as such.Key Differences Between AI Agents and Human Employees:

  • Authentication is different: As covered earlier, AI agents rely on API keys, tokens, or managed identities, which, if not properly managed, can lead to risks such as credential sprawl, hardcoded secrets, and privilege creep.
  • No clear ownership: Unlike human employees, AI agents don’t have a designated owner responsible for their actions - the agent often acts on behalf of an application or business vertical. This makes accountability and oversight more challenging.
  • Lack of structured access control: There is no standardized process to enforce least privilege, meaning AI agents may accumulate excessive permissions over time.
  • No defined offboarding process: AI agents don’t follow a structured lifecycle, and there is often no formal process to revoke their access when they are no longer needed.

This autonomy is what makes AI agents so useful, but also introduces major security challenges - especially when they can operate without centralized governance.

Let’s walk through an example, imagine the following scenario: AI Agent for Cloud Cost Optimization in Azure. 

AI Agent for Cloud Cost Optimization in Azure

A company deploys an AI agent in Azure AI Foundry to help optimize cloud costs by analyzing underutilized virtual machines (VMs) and automating resource scaling. Initially, the AI agent is granted read access to Azure billing data and monitoring logs to suggest cost-saving opportunities.

Step 1: Developers Expand the AI Agent's Capabilities

After seeing promising results, developers decide to let the AI agent take action instead of just making recommendations, granting write access to:

  • Stop or scale down underutilized VMs to reduce costs.
  • Generate cost reports and update internal dashboards.

To make this happen, the AI agent is integrated with Azure Automation and Azure Functions to execute cost-saving workflows.However, due to misconfiguration, the AI agent also receives broader write access than necessary - including the ability to modify identity and access management (IAM) policies.

Step 2: The AI Agent Requests More Access

Once the AI agent starts executing cost-saving measures, it runs into permissions restrictions on certain VM instances and databases. Instead of failing or notifying developers, the AI agent follows its optimization logic and:

  • Requests additional permissions via an API call to Azure Identity services.
  • Dynamically generates a new service principal (NHIs) to authenticate these privileged actions.
  • Uses newly created NHI to complete the task but fails to revoke or delete them afterward, due to missing cleanup logic, as it was not a task that developers knew they were granting.

Because Azure Managed Identities allow seamless authentication, the AI agent is able to create and use new NHIs dynamically without requiring manual approval. 

Had the principle of least privilege been enforced - limiting the AI agent’s write access only to VMs and reports - it would not have been able to modify identity permissions or escalate its own access. However, because it was granted broader write access, it was able to expand its privileges autonomously - unfortunately, this misconfiguration scenario is more common than you might imagine.

Step 3: Identity Sprawl and Privilege Accumulation

Without built-in governance and monitoring, the AI agent continues requesting additional access to complete tasks, leading to:

  • Uncontrolled Identity Creation: It generates new NHIs (service principals or managed identities) whenever authentication is needed.
  • Persistent Access Creep: Temporary permissions granted for cloud automation are not revoked after use.
  • Unmanaged Long-Lived Credentials: Instead of requesting approval for new access, the AI agent eventually recognizes that a more efficient way to operate is to reuse old NHIs, leading to long-lived, unmanaged credentials with persistent access to cloud resources.
  • Lack of Cleanup: Stale NHIs accumulate, increasing security risks.

Over time, these unmanaged AI-generated NHIs pile up, making it nearly impossible for security teams to track which identities are active, who created them, and whether they still need access. 

Why This Scenario Is Not Science Fiction

AI agents can dynamically modify their own permissions if given enough initial access, leveraging Azure Role-Based Access Control (RBAC) APIs to escalate privileges. Without strict governance, misconfigured automation policies may allow AI agents to grant themselves additional access, bypassing manual approval processes. 

Additionally, cloud platforms enable services to generate short-lived credentials or managed identities on demand, meaning AI agents can create NHIs without human oversight. If these NHIs are not properly tracked and revoked, they persist as security blind spots, accumulating over time. 

Many organizations already struggle with removing stale service accounts and API keys, and attackers often exploit old, unmanaged NHIs that still retain access to critical systems. Without proactive identity governance, AI-driven automation can introduce long-term security risks that go unnoticed - until it’s too late. Misconfigurations, automation gaps, and lack of governance can turn AI-driven efficiencies into security liabilities.

How This Is Different From Overprivileged Human Accounts

At first, this problem may seem similar to human privilege creep - where an employee is accidentally granted excessive access. However, AI agents introduce unique risks that traditional IAM solutions were never designed to handle.

What Makes AI-Driven Identity Risks Unique?

  • AI agents scale exponentially: A single misconfigured AI agent can generate dozens of privileged NHIs per day, each with unknown risks, significantly expanding the identity attack surface.
  • AI agents don’t ask permission if the back door is opened: If an AI agent determines it needs more access and finds that the easiest way is to self-assign it (assuming it has the permissions to do so), it will likely do so without considering whether it is appropriate or secure.
  • AI agents create identity sprawl – Instead of one identity per employee, AI agents can rapidly generate, use, and abandon NHIs at scale making it nearly impossible to track which identities exist, who created them, and whether they should still have access.

Preventing AI-driven identity risks requires more than reactive security measures. Agentic architectures and AI agents demand a new approach to identity security - one that proactively governs AI identities, enforces least privilege, and prevents unchecked privilege escalation.

How Oasis Security Solves the AI Agent Identity Challenge

AI agents are no longer just automation tools - they are active participants in cloud environments, making decisions, requesting access, and interacting with critical systems. This introduces a new layer of identity risk that traditional IAM solutions were not designed to handle.

Organizations must rethink identity governance to address AI-specific risks such as privilege escalation, identity sprawl, and uncontrolled credential usage. Oasis Security delivers purpose-built non-human identity (NHI) governance, ensuring that AI agents operate securely within enterprise environments without becoming unchecked security risks.

AI Identity Visibility and Governance: The Core Challenge

The challenge is not just whether AI agents are managed as human or non-human - the real issue is whether organizations have visibility and control over how AI agents operate.

Key Questions Organizations Must Answer:

  • How are AI agents accessing sensitive resources?
  • What actions are AI agents taking?
  • Are their permissions properly governed?

According to Accenture’s Tech Vision 2025 survey, 78% of executives agree that digital ecosystems must be built for AI agents as much as for humans within the next 3-5 years. Oasis Security enables organizations to implement AI-ready identity security today, preventing AI-driven security gaps before they become breaches.

  1. Discover and Monitor NHIs in Real Time

If an AI agent can dynamically generate new credentials, escalate its own privileges, or create long-lived NHIs without oversight, security teams lose visibility and control over the organization's attack surface.Oasis Security continuously scans cloud environments to:

  • Identify and track all NHIs created by AI agents or human users.
  • Detect stale NHIs that should no longer exist.
  • Provide full visibility into AI agents' permissions and authentication.

Without continuous monitoring, AI-driven identity sprawl can quickly get out of control, making it impossible to determine which identities exist, who created them, and whether they still need access.

  1. Prevent AI Agents from Escalating Their Own Privileges

Unlike human users, AI agents do not request access in predictable ways - they optimize for task completion, which can lead to unintended privilege escalation. Oasis Security enforces guardrails to ensure AI agents cannot autonomously increase their own permissions.

  • Enforce least privilege access: AI agents only get the minimum access needed - and no more.
  • Block unauthorized privilege escalation: If an AI agent attempts to modify its own permissions, Oasis detects and alerts it in real time.

Without proactive enforcement, AI agents may gradually accumulate more privileges than originally intended, creating security blind spots.

  1. Automate NHI Lifecycle Management to Prevent Security Gaps

One of the biggest risks in AI-driven environments is the accumulation of unused, forgotten, or overprivileged NHIs. If these identities are not properly managed, they become persistent security vulnerabilities that attackers can exploit.Oasis Security prevents identity sprawl by automatically:

  • Tracking when NHIs are no longer in use.
  • Offering workflows to revoke unnecessary access. 
  • Applying expiration policies to temporary AI-created NHIs to prevent them from lingering indefinitely.

Without continuous identity hygiene, AI-generated identities may accumulate and become unmanaged security liabilities.

  1. Ensure Compliance and Provide Full Audit Trails

Even though AI agents operate autonomously, organizations remain responsible for their actions. If AI-driven NHIs access sensitive data or modify critical systems, security teams must be able to audit and trace every action to ensure compliance and accountability.Oasis Security provides:

  • Enforce compliance with SOC 2, ISO 27001, GDPR, and industry-specific regulations.
  • Maintain full audit trails for every AI-generated identity, tracking what was accessed and when access was revoked.
  • Detecting unusual patterns in AI identity behavior, identifying potential security risks before they escalate.

With Oasis, organizations gain a clear record of every AI-generated identity, what it accessed, and when it was revoked - ensuring accountability and security at scale.

Final Thoughts: Why AI Agents Need Governance, Not Just Automation

AI agents may work alongside employees, but they introduce new identity risks that organizations must actively manage. Without governance, they can:

  • Expand their own access dynamically.
  • Create unmanaged NHIs at scale.
  • Bypass security controls designed for humans.

Oasis Security ensures AI-driven identities are properly governed - before they become a security liability. 

By implementing continuous visibility, least-privilege enforcement, and real-time identity governance, organizations can confidently embrace AI automation without compromising security.

More like this