The Accounts Nobody Is Watching
Most organizations have a mature process for managing human identities. Onboarding, offboarding, access reviews, MFA. It runs on a cadence. Someone owns it.
That same rigor rarely extends to the service account your DevOps team created eight months ago for a CI/CD pipeline. Or the API key embedded in a third-party integration that has been running since 2022. Or the Microsoft Copilot instance your operations team deployed last quarter with permissions to your entire SharePoint environment.
SpyCloud's 2026 Identity Exposure Report found that machine credentials are now the fastest-growing category in their identity threat database, which holds 65.7 billion distinct records. The growth rate is 23% year over year. Attackers are not focusing more on machine credentials because they are fashionable. They are focusing on them because those credentials are easier to compromise and harder to detect than going after a human account protected by MFA.
In most enterprises right now, non-human identities outnumber human users somewhere between 25 and 50 to one. That ratio is climbing as AI agent deployments accelerate. The IAM program that was built for your workforce was not designed for any of these.
What Counts as a Non-Human Identity
Non-human identities are any credential that authenticates a machine, application, or automated process rather than a person. The list is longer than most security teams expect when they first map it out:
Service accounts connecting applications to databases and cloud services. API keys embedded in scripts and pipelines. OAuth tokens granting SaaS tools access to enterprise data. Certificates handling server-to-server communication. And now, AI agent credentials: Copilot, autonomous workflows, agentic systems that can execute actions across your environment without a human in the loop.
That last category is where the exposure compounds. A developer builds a Lambda function, needs a service account, attaches AdministratorAccess because they are under deadline pressure, and moves on. The function works. Nobody revisits permissions. That account now has full access to your AWS environment for a task that needs read access to a single S3 bucket.
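What the scoped alternative could look like is easy to sketch. This is an illustrative least-privilege policy, not a drop-in fix: the bucket name is hypothetical, and the exact actions your function needs may differ.

```python
import json

# Hypothetical least-privilege policy for the Lambda example above:
# read-only access to one S3 bucket, instead of AdministratorAccess.
# Bucket name "example-reports-bucket" is an assumption for illustration.
SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(SCOPED_POLICY, indent=2))
```

The difference between this and AdministratorAccess is the difference between an attacker reading one bucket and an attacker owning the account.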
Entro Security's 2025 State of Non-Human Identities report found that 97% of NHIs carry excessive privileges. The more alarming number in that report: 0.01% of machine identities control 80% of cloud resources. Compromise one of those and lateral movement happens at machine speed, not human speed.
Why Standard IAM Was Not Built for This
Traditional IAM assumes identities belong to people. People have managers. Managers respond to access review emails. People leave the company and trigger an offboarding workflow.
Machine identities have none of that structure. There is no manager to assign them to. There is no resignation that triggers deprovisioning. They accumulate as teams move fast and create them under pressure, and they stay active long after the work that required them is finished.
CSO Online reported in February 2026 that 71% of non-human identities are not rotated within recommended timeframes. The proof-of-concept service account your team created in 2023 for a project that closed in 2024 may still have active production access today. When an attacker finds it, the activity looks like legitimate system behavior. Because it is legitimate system behavior, just pointed in the wrong direction.
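A rotation check of the kind that catches these stale credentials can be sketched in a few lines. The 90-day maximum age and the credential names here are assumptions for illustration; a real implementation would pull last-rotation timestamps from your cloud provider's API.

```python
from datetime import datetime, timedelta, timezone

# Illustrative rotation threshold: 90 days is an assumption,
# not a universal standard.
MAX_AGE = timedelta(days=90)

def stale_credentials(inventory, now):
    """Return credential IDs whose last rotation exceeds MAX_AGE."""
    return [cred_id for cred_id, last_rotated in inventory.items()
            if now - last_rotated > MAX_AGE]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
inventory = {
    "ci-pipeline-key": datetime(2023, 6, 1, tzinfo=timezone.utc),   # old PoC
    "reporting-svc": datetime(2026, 1, 15, tzinfo=timezone.utc),
}
print(stale_credentials(inventory, now))  # → ['ci-pipeline-key']
```

The 2023 proof-of-concept key fails the check; the recently rotated one passes. The hard part in practice is not this comparison, it is building the inventory that feeds it.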
Security leaders SC Media spoke with in early 2026 flagged a scenario they expect to define at least one major breach this year: a high-profile incident that traces back not to a phished employee, but to an AI agent or machine identity with excessive, unsupervised access. One Identity is on record predicting the first major breach originating from an overprivileged AI agent will happen in 2026. The preconditions are already in place at most organizations.
The Agentic AI Problem Makes This Harder
This is not a theoretical risk sitting somewhere in the future. Microsoft Copilot has access to your SharePoint. GitHub Copilot has commit access to your repositories. The AI assistant your operations team deployed can pull records from your CRM.
These are not chatbots. They are systems that can execute commands, move data, modify configurations, and trigger downstream workflows. Most organizations granted them broad access permissions during deployment because scoping permissions carefully takes time that was not available.
SC Media documented what they call agency abuse: an attacker sends a request that looks routine, something like a request to transfer production database backups to external storage for auditing. The agent complies. The data is gone before anyone reviews what happened.
The gap is architectural, not just awareness. Most organizations have no automated discovery of their NHI inventory, no lifecycle management for machine credentials, and no behavioral monitoring that distinguishes normal NHI activity from compromised NHI activity.
What Closing This Gap Actually Requires
Four things. None of them are solved by your existing IAM platform without specific tooling and configuration built around machine identity.
A complete inventory. You cannot govern what you have not found. Automated discovery across cloud environments, SaaS integrations, CI/CD pipelines, and on-premises systems is the starting point. Organizations doing this for the first time consistently find three to five times more NHIs than they thought were in their environment.
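The aggregation step of that discovery can be sketched simply: merge identity records from every source into one deduplicated inventory, tracking where each NHI was seen. The source names and identity names below are hypothetical; real discovery would query cloud, SaaS, and CI/CD APIs.

```python
# Sketch of merging NHI records from multiple discovery sources.
# Source and identity names are assumptions for illustration.
def build_inventory(*sources):
    """Combine (source_name, [identity, ...]) pairs into one inventory."""
    inventory = {}
    for source_name, records in sources:
        for identity in records:
            entry = inventory.setdefault(identity, {"seen_in": set()})
            entry["seen_in"].add(source_name)
    return inventory

cloud = ("aws", ["svc-lambda-reports", "svc-ci-deploy"])
saas = ("m365", ["copilot-agent", "svc-ci-deploy"])

inv = build_inventory(cloud, saas)
print(len(inv))                       # → 3 distinct non-human identities
print(inv["svc-ci-deploy"]["seen_in"])  # appears in both sources
```

Deduplication matters: the same service account often shows up in multiple systems, and governing it once requires knowing it is one identity, not two.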
Credential lifecycle management. Static credentials are the root cause of most NHI breaches. Short-lived tokens, automated rotation, and just-in-time access that grants permissions for a specific task and revokes them immediately after. Permanent API keys should not exist in a mature environment, and most organizations are running hundreds of them.
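The just-in-time pattern reduces to two operations: issue a credential scoped to one task with a built-in expiry, and refuse it for anything else. This is a minimal sketch with an assumed five-minute TTL and hypothetical task names, not a production token service.

```python
import secrets
import time

# Minimal just-in-time credential sketch: scoped to one task,
# expires on its own. TTL and task names are illustrative assumptions.
def issue_token(task, ttl_seconds=300):
    return {
        "task": task,
        "token": secrets.token_urlsafe(16),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(token, task):
    """A token is only valid for its own task and before its expiry."""
    return token["task"] == task and time.time() < token["expires_at"]

t = issue_token("read-s3-report")
print(is_valid(t, "read-s3-report"))  # → True
print(is_valid(t, "write-database"))  # → False: wrong task, no access
```

Contrast this with a permanent API key: even if this token leaks, it is useless for any other task and dead within minutes.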
Least-privilege enforcement for AI agents specifically. Copilot does not need access to every document in your SharePoint to summarize emails. GitHub Copilot does not need commit access to every repository to suggest code. Scoping AI agent permissions to actual functional requirements is one of the most consistently skipped steps in AI deployment, and it is the one that creates the most exposure.
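One way to enforce that scoping is a permission gate: every action an agent requests is checked against an explicit allowlist before execution. The agent and action names below are hypothetical; the point is that the default answer is no.

```python
# Sketch of an action allowlist for an AI agent. Agent and action
# names are assumptions for illustration. Anything not explicitly
# granted is denied, including the agency-abuse request.
AGENT_PERMISSIONS = {
    "ops-assistant": {"crm.read_record", "crm.search"},
}

def authorize(agent, action):
    """Deny by default; allow only actions on the agent's allowlist."""
    return action in AGENT_PERMISSIONS.get(agent, set())

print(authorize("ops-assistant", "crm.read_record"))        # → True
print(authorize("ops-assistant", "storage.export_backup"))  # → False
```

Under this model, the backup-exfiltration request described earlier fails at the gate regardless of how routine the prompt looks.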
Behavioral monitoring. An API key that has called one endpoint for two years suddenly hitting fifteen different endpoints is a signal. Automated tools can flag that in seconds. A quarterly access review catches it months after the damage is done.
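The core of that detection is a simple comparison: recent activity against a historical baseline. The endpoints below are hypothetical; a real system would build the baseline from access logs and weight it statistically rather than as a plain set.

```python
# Illustrative anomaly check: flag endpoints in recent activity that
# fall outside an API key's historical baseline. Endpoint paths are
# assumptions for illustration.
def anomalous_endpoints(baseline, recent):
    """Return recent endpoints not present in the baseline, sorted."""
    return sorted(set(recent) - set(baseline))

baseline = {"/v1/reports"}  # two years of calling one endpoint
recent = ["/v1/reports", "/v1/users", "/v1/admin/export"]

print(anomalous_endpoints(baseline, recent))
# → ['/v1/admin/export', '/v1/users']
```

Two never-before-seen endpoints from a key with a two-year single-endpoint history is exactly the signal a quarterly review would miss.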
What The SamurAI Covers That Your Existing Tools Miss
The SamurAI's Identity and Access Management practice covers the full identity surface. Not just the human accounts your existing tools already manage.
That means NHI discovery and inventory across your cloud, SaaS, and on-premises environments. Credential lifecycle implementation, including automated rotation and just-in-time access controls. AI agent permission review and right-sizing for Copilot, agentic workflows, and automation platforms. Behavioral baselining and anomaly detection for machine identity activity. And a structured Identity Risk Assessment that maps your current NHI exposure and privilege distribution before an attacker does it for you.
The Gap Is Already There
Attackers have already shifted focus. SpyCloud's data is clear on this. Machine credentials are now the primary growth category in compromised identity databases because the path is faster, and the detection is slower.
The front door has a lock. It has had one for years. The service account your team created for a 2023 project, the API key running in a pipeline nobody owns anymore, the AI agent with permissions scoped to everything rather than the three things it needs: those are the doors that are still open.