TL;DR:

  • Two in five organisations cite data loss via GenAI tools as a top concern
  • 38% flag unsupervised data access by AI agents as critical threat, with 54% lacking sufficient visibility and controls
  • 66% of organisations attribute their most significant data loss events to careless employees and third-party contractors, with just 1% of users responsible for 76% of events

As businesses rush to deploy generative AI and AI agents, they face the same risks that accompany the rapid implementation of any new technology: data spills, leaks, and breaches. A new Proofpoint report argues that “agentic workspaces” represent a new class of insider risk rivalling human error in severity.

Dual Concerns: Human and Agent Activity

Two in five respondent organisations identified data loss via public or enterprise GenAI tools as a top concern, whilst over a third worried about sensitive data being used in AI training. The situation worsens when AI agents operate as privileged superusers.

More than a third (38%) flagged unsupervised data access by AI agents as a “critical threat,” and 54% reported lacking sufficient visibility and controls over GenAI tools. In essence, AI agents are being left to their own devices—creating significant security exposure.
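
To picture the kind of visibility and control respondents say they lack, here is a minimal sketch of mediating every agent read through a gate that enforces a least-privilege allowlist and keeps an audit trail. The names (Agent, DataGate, the resource labels) are illustrative assumptions, not part of the report or any Proofpoint product.

```python
# Minimal sketch: a least-privilege gate and audit trail for AI agent
# data access. All names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Agent:
    name: str
    # Explicit allowlist: the agent may only read these resources.
    allowed_resources: frozenset


@dataclass
class DataGate:
    """Mediates every agent read so access is scoped and visible."""
    audit_log: list = field(default_factory=list)

    def read(self, agent: Agent, resource: str) -> bool:
        granted = resource in agent.allowed_resources
        # Record every attempt, granted or denied, for later review.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent.name,
            "resource": resource,
            "granted": granted,
        })
        return granted


if __name__ == "__main__":
    gate = DataGate()
    helper = Agent("report-summariser", frozenset({"sales/q3-summary.csv"}))
    # The in-scope read succeeds; the out-of-scope read is denied but logged.
    assert gate.read(helper, "sales/q3-summary.csv")
    assert not gate.read(helper, "hr/salaries.csv")
    print(f"{len(gate.audit_log)} access attempts recorded")
```

The point of the sketch is that an agent denied access still leaves a record, which is precisely the visibility the 54% say they are missing.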

Despite these AI-specific concerns, humans continue to represent the primary vulnerability. Two-thirds (66%) of organisations attributed their most significant data loss events to “careless employees” and third-party contractors, whilst 31% cited compromised users and 33% pointed to malicious insiders.

Behavioural analysis reveals that just 1% of users are responsible for 76% of data loss events, emphasising the importance of behaviour-aware, adaptive security strategies rather than blanket controls.
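
For readers who want to see how a concentration figure like that is derived, here is a minimal sketch that computes the share of data-loss events attributable to the riskiest 1% of users, given an event log keyed by user. The data and function names are illustrative assumptions, not the report's methodology.

```python
# Minimal sketch of the concentration analysis behind a "1% of users,
# 76% of events" style finding. Data and names are illustrative.
from collections import Counter
import math


def top_share(events_by_user: Counter, top_fraction: float = 0.01) -> float:
    """Share of all events attributable to the top `top_fraction` of users."""
    counts = sorted(events_by_user.values(), reverse=True)
    k = max(1, math.ceil(len(counts) * top_fraction))  # at least one user
    return sum(counts[:k]) / sum(counts)


if __name__ == "__main__":
    # Toy log: 200 users, two of whom generate almost all the events.
    log = Counter({f"user{i}": 1 for i in range(198)})
    log.update({"user_a": 400, "user_b": 300})
    print(f"Top 1% of users cause {top_share(log):.0%} of events")
```

With the toy log above, the top 1% (two users out of 200) account for roughly 78% of events, which is the shape of distribution that makes blanket controls inefficient.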

Behavioural Cybersecurity as Solution

Ryan Kalember, chief strategy officer at Proofpoint, stated: “We’ve entered a new era of data security where insider threats, relentless data growth and AI-driven change are testing the limits of traditional defences. Fragmented tools and limited visibility leave organisations exposed.”

Kalember argued that “the future of data protection depends on unified, AI-powered solutions that understand content and context, adapt in real time and secure information across both human and agent activity.”

AI-Enhanced Security Deployment

To mitigate threats from both humans and AI agents, Proofpoint advises pivoting towards behavioural cybersecurity and analytics. Nearly two-thirds (65%) of organisations report having already deployed AI-enhanced data security capabilities.

This approach focuses on identifying and monitoring high-risk users and agent behaviours rather than attempting to secure all activity equally—a recognition that effective security must prioritise resources where risk concentrates.
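
One way to make that prioritisation concrete is a tiered policy that escalates the control applied as a user's or agent's behavioural risk score rises. The thresholds and actions below are illustrative assumptions, not the report's prescriptions.

```python
# Minimal sketch of behaviour-aware, adaptive controls: the response
# escalates with behavioural risk rather than applying one blanket
# policy. Thresholds and actions are illustrative assumptions.
def control_for(risk_score: float) -> str:
    """Map a 0-1 behavioural risk score to an escalating control."""
    if risk_score < 0.3:
        return "allow"                      # low risk: no friction
    if risk_score < 0.7:
        return "allow + step-up logging"    # medium: watch more closely
    if risk_score < 0.9:
        return "require justification"      # high: add friction
    return "block + alert security team"    # critical: intervene


if __name__ == "__main__":
    for actor, score in [("typical employee", 0.1),
                         ("contractor handling client data", 0.75),
                         ("agent reading bulk HR data", 0.95)]:
        print(f"{actor}: {control_for(score)}")
```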

The research underscores that whilst AI agents promise productivity gains, organisations must implement robust visibility, access controls, and behavioural monitoring before deployment to avoid creating new attack surfaces that threat actors can exploit.


Source: TechRadar