Utilising Workday’s Agent System for secure AI management
Building Secure AI Agent Workflows with Microsoft Azure and Workday
Workday and Microsoft are building a practical bridge between HR-grade records and cloud identity for AI agents. The partnership links Workday’s Agent System of Record to Microsoft identity and AI capabilities. That link makes it possible to register an agent in Workday and give it an ID that Microsoft recognises. The public reporting on the collaboration gives the core facts: agents can be recorded in Workday and tied to Microsoft Entra identities and Azure AI tools (source). Workday’s announcements add detail on product-level integration and the intention to make agents manageable alongside people (source).
The immediate aim is simple: make AI agents first-class, trackable entities. That means an Agent System of Record entry, a verifiable identity, and governance hooks. The next aim is operational: move agent onboarding, role assignment and lifecycle events into existing HR and IT workflows. For me the practical benefit is clearer audit trails and fewer shadow agents running with unchecked privileges.
This changes how I design AI Agent Workflows. I can treat an agent like any other controlled actor. That means enforcing permissions via Azure identity controls, mapping agent roles to Workday attributes, and recording actions for later review. It also makes it easier to enforce separation between experimental agents and production agents.
Expect HR platform owners, identity and access engineers, AI platform leads, and compliance functions to be involved. Each group has a concrete deliverable: HR owners manage the Agent System of Record entries; identity teams issue and vet Entra Agent IDs; AI platform owners manage model access in Azure; compliance defines retention and logging rules.
Future developments
The partnership will likely add tighter APIs between Workday and Microsoft tools, more metadata fields for agents, and richer lifecycle events. That will let me automate common lifecycle tasks like suspending an agent’s identity after a failed audit or rotating keys when a model version changes.
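The kind of automation I have in mind can be sketched as a small event dispatcher: lifecycle events arrive, and each mapped event triggers an identity action. This is a minimal sketch under my own assumptions; the event names are invented, and the Entra calls are stubbed placeholders rather than real API calls.

```python
# Sketch of event-driven lifecycle automation for agent identities.
# Event names and actions are assumptions; a real integration would
# subscribe to Workday lifecycle events and call Entra/Graph APIs.

audit_log: list[str] = []

def suspend_identity(agent_id: str) -> None:
    # Placeholder for an Entra call that disables the agent's identity.
    audit_log.append(f"suspended {agent_id}")

def rotate_keys(agent_id: str) -> None:
    # Placeholder for a credential-rotation call.
    audit_log.append(f"rotated keys for {agent_id}")

LIFECYCLE_ACTIONS = {
    "audit_failed": suspend_identity,
    "model_version_changed": rotate_keys,
}

def handle_event(event_type: str, agent_id: str) -> bool:
    """Dispatch a lifecycle event; return False if no action is mapped."""
    action = LIFECYCLE_ACTIONS.get(event_type)
    if action is None:
        return False
    action(agent_id)
    return True
```

The point of the table-driven dispatch is that adding a new lifecycle rule is a one-line change, and unmapped events fail visibly rather than silently.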
Secure management of AI agents
Permissions management is the hardest security control to get right. Agents will need least-privilege scopes for data access, service calls and downstream systems. In practice I create role templates in Azure for common agent types: read-only analyst, workflow orchestrator, and transaction processor. Then I map those templates to Workday roles so provisioning can be automated. That keeps human mistakes out of agent privileges and reduces blast radius when an agent behaves unexpectedly.
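A minimal sketch of that template-to-scope mapping, with hypothetical template names and scope strings rather than real Azure role definitions:

```python
# Illustrative role-template mapping. Template names and scope strings
# are my own assumptions, not actual Workday or Azure identifiers.

AGENT_ROLE_TEMPLATES = {
    "read_only_analyst": ["data.read"],
    "workflow_orchestrator": ["data.read", "workflow.execute"],
    "transaction_processor": ["data.read", "transaction.write"],
}

def resolve_permissions(workday_agent_type: str) -> list[str]:
    """Return the least-privilege scope set for a registered agent type.

    Raises KeyError for unknown types so provisioning fails closed
    rather than granting a default scope.
    """
    return AGENT_ROLE_TEMPLATES[workday_agent_type]
```

Failing closed on an unknown agent type is the deliberate design choice here: an agent that does not match a template gets nothing until a human classifies it.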
Identity verification gives an agent a cryptographic and organisational anchor. I use Microsoft Entra Agent ID for authentication and Workday’s Agent System of Record for the organisational anchor. That dual approach lets me assert that an identity belongs to a registered agent and that the agent’s attributes (owner, purpose, allowed datasets) are stored in a single authoritative record. That combination supports incident response and forensic work when something goes wrong.
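The dual-anchor check can be sketched like this; the record fields and status values are assumptions for illustration, not Workday’s actual schema:

```python
# Sketch: verify that an authenticated identity maps to an active
# Agent System of Record entry before allowing any action.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str
    purpose: str
    allowed_datasets: set[str]
    status: str  # assumed values: "active", "suspended", "retired"

def verify_agent(entra_agent_id: str,
                 asor: dict[str, AgentRecord]) -> AgentRecord:
    """Reject identities with no registered, active agent record."""
    record = asor.get(entra_agent_id)
    if record is None:
        raise PermissionError(f"unregistered agent identity: {entra_agent_id}")
    if record.status != "active":
        raise PermissionError(f"agent {entra_agent_id} is {record.status}")
    return record
```

Returning the full record on success matters for forensics: every downstream log line can carry the owner and purpose, not just an opaque ID.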
Best practices for secure workflows
- Define a registration workflow. Require documented purpose, owner, and data access justification before issuing an Entra Agent ID. Record all fields in the Agent System of Record.
- Automate provisioning. Use Azure role templates and Workday events to tie identity issuance to approvals. That avoids manual credential handoffs.
- Enforce least privilege. Grant the smallest set of permissions needed and time-box sensitive rights.
- Log everything. Record agent actions in central logging, and keep links back to the agent record for traceability.
- Version-control agent artefacts. Treat prompts, chain logic and connectors as code and keep them in a registry that references the agent ID.
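The first two practices can be sketched together as a registration gate that refuses to issue an identity until the required fields are present. The field names and the Workday/Entra calls below are hypothetical placeholders, not real API surfaces:

```python
# Sketch of a registration gate: no Entra Agent ID without a complete
# Agent System of Record entry. Field names are assumed, and the two
# integration calls are stubs standing in for Workday and Entra APIs.
import uuid

REQUIRED_FIELDS = ("purpose", "owner", "data_access_justification")

def validate_registration(request: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not request.get(f)]

def write_asor_entry(request: dict) -> str:
    # Placeholder for the Agent System of Record write.
    return f"asor-{uuid.uuid4().hex[:8]}"

def issue_entra_agent_id(record_id: str) -> None:
    # Placeholder for the identity issuance call.
    pass

def register_agent(request: dict) -> str:
    """Gate identity issuance on a complete registration record."""
    missing = validate_registration(request)
    if missing:
        raise ValueError(f"registration rejected, missing: {', '.join(missing)}")
    record_id = write_asor_entry(request)
    issue_entra_agent_id(record_id)
    return record_id
```

Because the record is written before the identity is issued, every Entra Agent ID is born with a documented owner and purpose; there is no window in which an anonymous credential exists.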
Challenges in implementation
There are concrete friction points. Mapping Workday attributes to Azure policies is rarely one-to-one. Identity lifecycles differ: HR events are human-centric and slower than CI/CD-driven model updates. I also see governance gaps around third-party agents supplied by vendors, which may not be manageable via internal identity controls. Those third-party agents need clear contract terms and token-exchange patterns. Finally, telemetry and observability for agent behaviour are immature in many stacks. Expect to build custom logging until vendors ship better primitives.
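Until those primitives arrive, a structured log line that joins back to the agent record is a workable stopgap. The field names here are my own convention, not a vendor schema:

```python
# Sketch of a custom structured log entry for agent actions.
# The agent_id field is the join key back to the Agent System of
# Record, so any log line can be traced to an owner and purpose.
import datetime
import json

def log_agent_action(agent_id: str, action: str, resource: str) -> str:
    """Return one JSON log line linking an action to its agent record."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,   # joins to the Agent System of Record entry
        "action": action,
        "resource": resource,
    }
    return json.dumps(entry)
```

Keeping the entry as flat JSON means it drops straight into whatever central logging pipeline is already in place, and the `agent_id` join is a simple query away.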
Treat agents as governed identities from day one. Use Workday’s Agent System of Record as the single source for who an agent is and why it exists. Use Microsoft Azure and Entra to enforce access and operational controls. Automate the handoffs between the two systems so approvals, provisioning and revocation are repeatable.
That combination makes AI Agent Workflows auditable, revocable and safe to run at scale.