On February 5, 2026, the National Institute of Standards and Technology (“NIST”) released a concept paper, “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization.” The paper proposes a demonstration to explore how identity and authorization practices can be applied to artificial intelligence (“AI”) agents in enterprise settings. The demonstration aims to produce a practical guide, developed in National Cybersecurity Center of Excellence (“NCCoE”) labs with commercially available technologies, showing implementation approaches and lessons learned. NIST seeks public input to shape the project, describe real-world challenges and solutions, and guide future standards. Comments are due April 2.
Background
For over a decade, enterprises have relied on code-based systems to automate workflows, manage cloud workloads, and deploy APIs. AI agents take automation to a new scale, delivering productivity, efficiency, and smarter decision-making. AI agents are increasingly critical to enterprise functions, including workforce efficiency and security. But they also introduce new risks. Granting AI agents access to sensitive data and critical systems without robust controls can lead to misuse, errors, or even breaches. Applying identity principles—identification, authentication, and authorization—can help ensure that AI agents are trusted, accountable, and operating within intended limits.
The NIST demonstration
NIST’s NCCoE is planning a demonstration project to evaluate how identity and authorization practices can apply to AI agents in enterprise environments. The project will focus on internal enterprise agents where organizations maintain control and visibility over agents and the systems they access.
Potential topics include:
- Identification: Distinguishing AI agents from human users and managing metadata to control the range of agent actions (e.g., from human-in-the-loop approval to fully autonomous action).
- Authorization: Applying standards such as OAuth 2.0 and its extensions, along with policy-based access control mechanisms, to define and enforce AI agent rights and entitlements.
- Access Delegation: Linking user identities to AI agents to maintain accountability and oversight.
- Logging and Transparency: Linking specific AI agent actions to the non-human identity that performed them to enable effective visibility into system activity.
- Tracking Data Flows: Maintaining provenance of user prompts and data sources to support risk assessment and policy decisions about AI agent actions.
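To illustrate how the authorization and access-delegation topics above could fit together, the following sketch constructs an OAuth 2.0 Token Exchange request (RFC 8693), one pattern by which an enterprise could issue an AI agent a token derived from, and auditable back to, a delegating human user. The token values, scope name, and helper function are hypothetical placeholders, not part of NIST's paper; a real deployment would POST this form-encoded body to its authorization server's token endpoint.

```python
def build_delegation_request(user_subject_token: str,
                             agent_actor_token: str,
                             scope: str) -> dict:
    """Build RFC 8693 token-exchange parameters delegating a user's
    access to an AI agent.

    subject_token identifies the delegating user; actor_token identifies
    the AI agent acting on the user's behalf, preserving accountability
    in the resulting token.
    """
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": agent_actor_token,
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": scope,  # request only the least-privilege scopes needed
    }

# Hypothetical token values for illustration only.
request_body = build_delegation_request(
    user_subject_token="user-token-abc",
    agent_actor_token="agent-token-xyz",
    scope="crm.read",
)
```

Because the issued token would carry both a subject (the user) and an actor (the agent), downstream logs could attribute each action to the agent while preserving the line of accountability to the approving human.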
The project is not aimed at identifying and managing access for external agents from untrusted sources, but future iterations of the project may expand to address these scenarios.
Standards and best practices under consideration include Model Context Protocol; OAuth 2.0/2.1 and extensions; OpenID Connect; SPIFFE/SPIRE; System for Cross-domain Identity Management; and Next Generation Access Control. NIST will also apply relevant guidelines from SP 800-207 Zero Trust Architecture; SP 800-63-4 Digital Identity Guidelines; NISTIR 8587 Protecting Tokens and Assertions from Forgery, Theft, and Misuse; and other best practices and standards.
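As a concrete sense of how two of these standards might interact, the sketch below pairs a SPIFFE-style workload identity (the `spiffe://` URI format is real; the trust domain, agent name, and policy table are hypothetical) with a minimal policy-based access check enforcing least privilege, assumptions chosen only for illustration.

```python
# Agents carry machine identities (SPIFFE IDs); humans do not.
AGENT_ID = "spiffe://example.org/agents/invoice-summarizer"
HUMAN_ID = "alice@example.org"

# Hypothetical policy table: each agent identity maps to the narrow
# set of (resource, action) pairs it is entitled to perform.
POLICY = {
    "spiffe://example.org/agents/invoice-summarizer": {
        ("invoices", "read"),
    },
}

def is_agent(identity: str) -> bool:
    """Distinguish AI agents from human users by identity format."""
    return identity.startswith("spiffe://")

def authorize(identity: str, resource: str, action: str) -> bool:
    """Allow only actions explicitly granted to this identity."""
    return (resource, action) in POLICY.get(identity, set())
```

Under this policy the agent can read invoices but cannot delete them, and a human identity is recognizable as such before any agent-specific controls apply, the kind of identification-plus-authorization layering the demonstration is expected to explore.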
Industry feedback requested
NIST seeks industry input to guide the design and scope of its demonstration, including:
- Use Cases and Opportunities: What AI agent use cases exist now or are anticipated? What benefits and risks do these agents create? How do agentic architectures differ from traditional software or microservices?
- Identification: How should agents be recognized in enterprise architectures? What metadata is essential for an AI agent’s identity? Should identities be ephemeral (e.g., task-specific) or fixed? Should identities be tied to hardware, software, or organizational boundaries?
- Authentication: What constitutes strong authentication for AI agents, and how should key management (e.g., issuance, updates, revocation) be handled?
- Authorization: How can zero-trust principles be applied to agent authorization? Can authorization policies be dynamically updated when an agent's context changes? How is least privilege enforced when agent behavior may be unpredictable? How can agents prove authority for actions?
- Auditing and Non-Repudiation: How can organizations ensure that agents log actions and intent in tamper-proof, verifiable ways? How can organizations ensure accountability is maintained for agent actions tied to human approval?
- Prompt Injection Prevention and Mitigation: What controls prevent both direct and indirect prompt injections, and how can their impact be minimized if they occur?
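One way to approach the tamper-proof, verifiable logging contemplated by the auditing questions above is a hash-chained audit log: each entry's hash covers the previous entry's hash, so any later alteration breaks the chain. The sketch below is illustrative only; the field names and entries are hypothetical.

```python
import hashlib
import json

def append_entry(log: list, agent_id: str, action: str, approver: str) -> None:
    """Append an audit entry cryptographically chained to the prior entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"agent_id": agent_id, "action": action,
              "approved_by": approver, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify(log: list) -> bool:
    """Recompute every hash in order; any tampering yields False."""
    prev_hash = "0" * 64
    for entry in log:
        record = {k: v for k, v in entry.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(record, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

audit_log = []
append_entry(audit_log, "agent-42", "export_report", "alice@example.org")
append_entry(audit_log, "agent-42", "email_report", "alice@example.org")
```

Each entry records both the agent identity and the approving human, so accountability for human-approved agent actions is preserved; editing any historical entry invalidates every subsequent hash on verification.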
Recommendations for industry stakeholders
Entities developing or deploying AI agents should consider submitting comments to inform the NIST demonstration. The comment period is a key opportunity to shape emerging standards and influence guidance on AI agent risk management, operational controls, and future compliance expectations.
- AI Developers: Explain how current and next-generation AI agents address identity, authentication, and authorization. Discuss metadata, least-privilege controls for unpredictable behaviors, auditing practices, and prompt injection controls.
- AI Deployers: Describe AI agent use cases, risks, and operational benefits. Explain how AI agents are integrated into enterprises, including identity, authorization, and audit practices.
- Other Stakeholders: Provide insight on ecosystem-wide standards, interoperability, and platform-level controls relevant to AI agent security.