TL;DR

With 89% of organisations running or piloting AI workloads, Tenable research warns of an “AI exposure gap” where security practices fail to keep pace with AI adoption. One in three AI adopters have experienced breaches, but these stem from familiar exposures—software vulnerabilities, insider threats, and misconfigurations—rather than sophisticated model attacks. Only 22% fully classify and encrypt AI data.

Security Practices Lag Behind AI Deployment

Tenable’s research reveals that only 22% of surveyed organisations fully classify and encrypt AI data, leaving 78%—or four in five—with data accessible in the event of an attack. This fundamental security gap exists even as organisations rush to deploy AI capabilities.

Software vulnerabilities (21%) and insider threats (18%) were among the top three causes of breaches, though AI model flaws (19%) also pose risks. “The real risks come from familiar exposures—identity, misconfigurations, vulnerabilities—not science-fiction scenarios,” explained Liat Hayun, Product and Research VP at Tenable.

Reactive Defence Posture

The exposure gap stems from enterprises scaling AI faster than they can secure it, leaving visibility across systems fragmented. Companies tend to employ reactive defences to pick up pieces after incidents rather than securing systems ahead of attacks.

Currently, around half (51%) rely on the NIST AI Risk Management Framework or the EU AI Act to guide their strategies, suggesting they may only be doing the bare minimum for compliance. Only one in four (26%) conduct AI-specific security testing like red-teaming.

Foundational Controls Over Compliance Checkboxes

Tenable advises companies to prioritise foundational controls including identity governance, misconfiguration monitoring, workload hardening, and access management. The recommendation positions compliance as the starting point for strong security posture—not the end goal.

This approach addresses the root causes of the breach patterns observed: familiar infrastructure and process failures rather than novel AI-specific attack vectors. Whilst model attacks may emerge as AI systems become more sophisticated, current threats exploit traditional security weaknesses amplified by rapid AI deployment.

Looking Forward

The AI exposure gap highlights a recurring pattern in enterprise technology adoption: enthusiasm for capability deployment outpacing investment in security fundamentals. Organisations that fail to address this gap risk experiencing breaches that undermine confidence in AI initiatives and trigger regulatory scrutiny.

The challenge is compounded by the pace of AI advancement. As capabilities expand and AI systems handle increasingly sensitive data and critical decisions, the consequences of inadequate security controls will escalate. Organisations must close the exposure gap now—whilst threats remain familiar—rather than waiting for more sophisticated model-specific attacks to emerge.

Success requires shifting from compliance-driven minimum standards to comprehensive security architecture that treats AI workloads with the same rigour as other critical enterprise systems. This means implementing proper data classification and encryption, conducting regular security testing, and maintaining visibility across AI deployments—fundamentals that shouldn’t be novel requirements in 2025.


Source: TechRadar

Share this article