
2026 is becoming the year agentic AI grows into the biggest cybersecurity risk for enterprises. Non-human identities — the API keys, service accounts and digital certificates of AI agents — outnumber human identities at a ratio of 50:1. Attackers know this. Does your security strategy?
The adoption of autonomous AI agents in Microsoft 365, Azure and other enterprise platforms is accelerating. At the same time, an attack surface is growing that most security teams have not yet fully mapped: the identity layer of AI agents, also referred to as Non-Human Identities (NHIs).
Every AI agent that accesses enterprise systems needs an identity: an API key, a service account, an OAuth token or a digital certificate. These non-human identities are the credentials that agents use to authenticate with Microsoft Graph, databases, SaaS applications and cloud environments.
Research shows that NHIs now outnumber human identities at a ratio of 50 to 1, projected to reach 80 to 1 within two years. At the same time, traditional IAM (Identity and Access Management) tools were designed for humans: they lack the visibility and control needed for autonomously operating agents.
Attackers no longer target only human users. A compromised internal agent can authorise financial transactions on behalf of a CFO within internal systems — without any human phishing required. Another approach: injecting malicious instructions into documents processed by a Copilot agent (prompt injection), causing the agent to execute unintended actions.
According to CyberArk research, 97 per cent of AI-related data breaches are attributable to poor access management of NHIs. Barracuda Networks reports that 48 per cent of security professionals identify agentic AI as the number one attack vector for 2026.
Gartner describes four forces that together amplify the risk: (1) the rapid operationalisation of agentic AI in production environments, (2) identity as the primary exploitation vector, (3) drastically compressed exploitation windows — attacks complete in minutes rather than hours — and (4) evolved extortion tactics that combine data exfiltration with operational disruption.
On 20 March 2026, Microsoft published its Secure Agentic AI framework, providing guidance on agent identity attestation, least-privilege agent scopes and behaviour-based anomaly detection for agent activity. Agent 365, becoming generally available on 1 May 2026, provides a management layer for non-human agent identities including activity logs and policy enforcement.
Security teams can start today: (1) Create an inventory of all NHIs in your environment — API keys, service accounts and OAuth tokens. (2) Apply Zero Trust principles to agents: every agent should have minimal privileges and every action must be verifiable. (3) Implement behaviour-based anomaly detection for agent activity alongside traditional user monitoring. (4) Prepare for Agent 365 as a governance layer for AI agent identities in Microsoft 365.
Zarioh Digital Solutions guides organisations in securing their Microsoft 365 environment in the age of agentic AI. Get in touch for a security scan of your agent governance.