TL;DR:
- Agentic AI systems operate autonomously with less oversight, creating new security vulnerabilities
- Traditional access control models struggle with generalist agents having broad permissions across systems
- Zero Trust principles require reimagining: treat agents as separate identities with segmented, time-bound permissions
Just three years after ChatGPT launched to popular acclaim, the industry is looking ahead to the next wave: agentic AI. With greater autonomy comes greater risk—and the challenge for corporate IT and security teams will be empowering users whilst managing a new breed of threats.
The Autonomy-Risk Equation
Agentic AI systems offer a major leap forward from generative AI chatbots. Whilst the latter create and summarise content reactively based on prompts, the former proactively plan, reason, and act autonomously to complete complex, multi-stage tasks. Agentic AI can even adjust plans on the fly when presented with new information.
Gartner predicts that by 2029, agentic AI will “autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs.”
However, the same capabilities that deliver productivity benefits should also cause concern. Less oversight means malicious actors could attack and subvert agent actions without raising red flags. And the ability to take irreversible actions, such as deleting files or sending emails to the wrong recipients, magnifies the potential damage if adequate safeguards are not in place.
Access Control Challenges
Tackling these challenges starts with identity and access management (IAM). Organisations that are effectively creating a digital workforce of agents need to manage identities, credentials, and permissions for that workforce.
Yet most agents today are generalists rather than specialists. ChatGPT agent exemplifies this: it can schedule meetings, send emails, interact with websites, and much more. This flexibility makes it powerful—but also harder to apply traditional access control models built around human roles with clear responsibilities.
If a generalist agent were manipulated via an indirect prompt injection attack, its overly permissive access rights could hand an attacker broad access to sensitive systems.
Zero Trust Reimagined for Agentic AI
Zero Trust in an agentic AI environment requires fundamental shifts. First, assume agents will perform unintended and difficult-to-predict actions—something even OpenAI acknowledges. Stop thinking of AI agents as extensions of existing user accounts; instead, treat them as separate identities with their own credentials and permissions.
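To make the "separate identity" idea concrete, the sketch below models an agent as its own principal with explicitly granted scopes and a short-lived credential, rather than a reuse of a human user's account. It is a minimal illustration only, not any specific IAM product's API; the AgentIdentity and Credential classes, the issue_credential function, and the 15-minute TTL are all assumptions made for the example.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """An agent registered as its own principal, not an alias of a human account."""
    agent_id: str
    owner: str                    # the human or team accountable for the agent
    permissions: frozenset[str]   # scopes granted explicitly, nothing inherited

@dataclass
class Credential:
    token: str
    agent_id: str
    expires_at: float

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def issue_credential(identity: AgentIdentity, ttl_seconds: int = 900) -> Credential:
    """Issue a short-lived credential bound to the agent identity, keeping access time-bound."""
    return Credential(
        token=secrets.token_urlsafe(32),
        agent_id=identity.agent_id,
        expires_at=time.time() + ttl_seconds,
    )

# Example: a scheduling agent is granted only calendar scopes, with a named owner.
scheduler = AgentIdentity(
    agent_id="agent-scheduler-01",
    owner="it-ops@example.com",
    permissions=frozenset({"calendar:read", "calendar:write"}),
)
credential = issue_credential(scheduler)
print(credential.agent_id, credential.is_valid())
```

The point of the design is accountability: the agent's permissions, credential lifetime, and responsible owner are all visible properties of the identity itself.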
Access management should be enforced at both the agent level and the tool level, governing which resources agents can access. Fine-grained controls keep permissions aligned with each task through "segmentation": restricting an agent's permissions so it can reach only the systems and data needed for its job, and no more, ideally for no longer than the task requires.
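One way to express that segmentation is a deny-by-default check on every tool call, as in the hedged sketch below. The scope names and the check_access helper are illustrative assumptions; in practice this enforcement would sit in a gateway or the IAM layer rather than inside agent code.

```python
# Illustrative deny-by-default segmentation: every tool call is checked against
# the agent's granted scopes before it reaches the underlying system.
TOOL_SCOPES = {
    "read_calendar": "calendar:read",
    "create_event": "calendar:write",
    "send_email": "mail:send",      # deliberately NOT granted to the scheduling agent
    "delete_file": "files:delete",  # deliberately NOT granted to the scheduling agent
}

def check_access(granted: frozenset[str], tool_name: str) -> bool:
    """A tool call succeeds only if its scope was explicitly granted to the agent."""
    required = TOOL_SCOPES.get(tool_name)
    return required is not None and required in granted

granted_scopes = frozenset({"calendar:read", "calendar:write"})
print(check_access(granted_scopes, "create_event"))  # True: within the agent's segment
print(check_access(granted_scopes, "send_email"))    # False: outside the segment
```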
Human Oversight as Second Factor
Traditional multifactor authentication doesn’t translate well to agents. If an agent is compromised, asking it for a second factor adds little security. Instead, human oversight can serve as a second layer of verification, especially for high-risk actions.
This must be balanced against consent fatigue: if agents trigger too many confirmations, users may start approving actions reflexively—defeating the security purpose.
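A common way to strike that balance is to gate only actions above a risk threshold, so routine reads proceed unattended while irreversible actions wait for a person. The sketch below illustrates the idea under assumed risk tiers; the Risk enum, ACTION_RISK table, and needs_human_approval function are hypothetical, not drawn from any particular product.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1     # read-only, easily reversible
    MEDIUM = 2
    HIGH = 3    # irreversible or externally visible (deleting files, sending email)

# Hypothetical risk classification per action type.
ACTION_RISK = {
    "read_calendar": Risk.LOW,
    "create_event": Risk.MEDIUM,
    "send_email": Risk.HIGH,
    "delete_file": Risk.HIGH,
}

def needs_human_approval(action: str, threshold: Risk = Risk.HIGH) -> bool:
    """Ask a human only for actions at or above the threshold, limiting consent fatigue."""
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions are treated as high risk
    return risk.value >= threshold.value

def execute(action: str, approved_by: str | None = None) -> str:
    if needs_human_approval(action) and approved_by is None:
        return f"BLOCKED: '{action}' requires explicit human approval"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"EXECUTED: '{action}'{suffix}"

print(execute("read_calendar"))                    # runs unattended
print(execute("send_email"))                       # blocked until a person signs off
print(execute("send_email", approved_by="alice"))  # proceeds, with the approval recorded
```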
Visibility and Monitoring
Organisations need visibility into agent actions. Setting up systems for logging agent activities and monitoring for unusual behaviour mirrors a key Zero Trust element and proves essential for both security and accountability.
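In practice, that means every agent action should produce a structured, attributable record that can later be queried for unusual patterns. The snippet below is a minimal sketch using Python's standard logging module; the field names such as agent_id, tool, and outcome are assumptions rather than an established schema.

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit log: one JSON record per agent action, attributable and queryable.
audit = logging.getLogger("agent_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler())  # in production this would ship to a SIEM

def log_agent_action(agent_id: str, tool: str, outcome: str, approved_by: str | None = None) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "outcome": outcome,          # e.g. "executed", "denied", "awaiting_approval"
        "approved_by": approved_by,  # present only when a human signed off
    }
    audit.info(json.dumps(record))

log_agent_action("agent-scheduler-01", "create_event", "executed")
log_agent_action("agent-scheduler-01", "send_email", "denied")
```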
It’s still early days for agentic AI. But if organisations want to embrace the technology’s ability to take actions with minimal oversight, they need confidence that risk is appropriately managed. The best way to achieve that is by never trusting anything by default.
Source: TechRadar