Zero Trust Architecture Proves Essential for AI Security Challenges
TL;DR: Security expert Duncan Greatwood argues that established Zero Trust principles provide the proven framework needed to secure LLMs and AI agents. Identity-based authorization and continuous verification ensure AI systems can reach sensitive data only under explicit controls, addressing risks that multiply at machine speed and scale.
Duncan Greatwood, CEO of Xage Security, has argued that Zero Trust architecture—the “tried and true” cybersecurity framework—is now more essential than ever for securing AI systems. As organisations deploy LLMs and autonomous AI agents with access to enterprise data, the multiplication of potential exposure pathways demands identity-based safeguards at every interaction.
Context and Background
The fundamental security challenge remains unchanged by AI, but the scale and complexity have increased dramatically. LLMs excel at processing vast data volumes, yet every interaction between users, agents, models, and databases creates new risk pathways. A single unmanaged access point that might expose a handful of records in traditional systems could, when exploited by an AI agent, expose millions of sensitive data points within seconds.
AI agents operate autonomously, chaining actions together and orchestrating workflows across multiple systems—activities that blur traditional security perimeters. Prompt injection attacks, where malicious inputs trick LLMs into revealing sensitive data, have proven capable of bypassing even advanced filtering systems. Static defences no longer suffice when AI components can rapidly escalate access through sophisticated prompt manipulation.
Zero Trust addresses these challenges by treating every user, device, application, and AI agent as requiring continuous verification. Rather than relying solely on prompt or output filtering, Zero Trust enforces security deeper in the stack—governing which agents and models can access which data, under specific conditions, and for defined durations.
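The access model described above can be sketched in a few lines. This is an illustrative example only, not Xage's implementation: the names (`AgentIdentity`, `Entitlement`, `authorize`) and the deny-by-default, time-bounded grant logic are assumptions made to show what "which agents can access which data, under specific conditions, and for defined durations" might look like in code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Entitlement:
    resource: str          # e.g. "sales_db.read"
    granted_at: datetime   # when the grant was issued
    duration: timedelta    # access expires after this window

@dataclass
class AgentIdentity:
    agent_id: str
    roles: set = field(default_factory=set)
    entitlements: list = field(default_factory=list)

def authorize(agent: AgentIdentity, resource: str, now: datetime) -> bool:
    """Deny by default; allow only an explicit, unexpired entitlement."""
    return any(
        e.resource == resource and now < e.granted_at + e.duration
        for e in agent.entitlements
    )

# Usage: an AI agent with a 15-minute grant to read one database.
start = datetime(2025, 10, 7, 9, 0)
agent = AgentIdentity(
    agent_id="report-bot",
    roles={"analyst"},
    entitlements=[Entitlement("sales_db.read", start, timedelta(minutes=15))],
)

print(authorize(agent, "sales_db.read", start + timedelta(minutes=5)))    # True
print(authorize(agent, "sales_db.read", start + timedelta(minutes=20)))   # False: grant expired
print(authorize(agent, "customers_db.read", start + timedelta(minutes=5)))  # False: never granted
```

Because the check happens at the access layer rather than in a prompt filter, a manipulated LLM cannot talk its way past it: the agent either holds a live entitlement for that resource or it does not.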
Looking Forward
Practical implementation requires treating AI processes like human users, assigning each agent or model its own identity, roles, and entitlements. Fine-grained, context-aware controls limit access based on real-time factors, whilst protocol-level enforcement blocks unauthorised access regardless of prompt sophistication. Continuous monitoring and tamperproof logging ensure compliance and accountability across complex interaction chains.
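One common way to make logging tamper-evident is a hash chain, where each audit entry commits to the hash of the previous entry, so editing any earlier record breaks verification. The sketch below is a hypothetical illustration of that general technique (the `AuditLog` class and its methods are invented for this example), not a description of any specific product's logging:

```python
import hashlib
import json

def _entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the record together with the previous entry's hash."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((record, _entry_hash(prev, record)))

    def verify(self) -> bool:
        """Recompute the chain; any altered record breaks it."""
        prev = "genesis"
        for record, h in self.entries:
            if _entry_hash(prev, record) != h:
                return False
            prev = h
        return True

log = AuditLog()
log.append({"agent": "report-bot", "action": "read", "resource": "sales_db"})
log.append({"agent": "report-bot", "action": "read", "resource": "hr_db"})
print(log.verify())  # True

# Retroactively altering an earlier record is detected on verification.
log.entries[0][0]["resource"] = "customers_db"
print(log.verify())  # False
```

In practice the chain head would be anchored somewhere the agents cannot write, but even this minimal version shows how accountability can be preserved across a long chain of autonomous actions.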
Organisations that can confidently deploy AI whilst safeguarding data will innovate faster and maintain regulatory compliance as AI governance frameworks evolve globally. Zero Trust provides the proven security model that enables AI benefits without unacceptable risks.
Source Attribution:
- Source: TechRadar
- Original: https://www.techradar.com/pro/zero-trust-a-proven-solution-for-the-new-ai-security-challenge
- Published: 7 October 2025