TL;DR: Consumer-grade AI guardrails that apply uniform restrictions fail enterprise requirements, according to a Knostic co-founder. Persona-based access controls (PBAC) enable AI systems to tailor responses based on user roles, clearance levels, and context, providing appropriate information access whilst maintaining compliance and audit trails aligned with emerging regulations such as the EU AI Act.
The proliferation of AI tools in enterprise settings has exposed fundamental limitations in consumer-oriented safety mechanisms. Whilst high-profile incidents—including reports of ChatGPT providing harmful advice to teenagers—have prompted sweeping content restrictions, these “clumsy one-size-fits-all” approaches create friction in workplace contexts where information sensitivity varies dramatically by role and responsibility.
Role Context Determines Information Risk
The problem manifests clearly in enterprise scenarios: a compliance officer may legitimately need to reference regulatory guidance on insider trading, yet that same information could pose risks if accessible to junior employees. Similarly, developers require deep access to IT documentation that business users should not encounter through casual prompts.
Consumer-style content filters that “treat everyone the same” cannot address these nuanced requirements. Instead, enterprises require AI systems that recognise roles, responsibilities, and legal boundaries—an approach the Knostic co-founder terms persona-based access controls (PBAC).
PBAC vs. Traditional Role-Based Access
PBAC extends beyond traditional role-based access control (RBAC) commonly used in cybersecurity. Whilst RBAC determines which systems and files a role can access, PBAC addresses a distinct question: what knowledge should this persona see in this specific context, and what should be filtered out?
The distinction becomes evident in a practical example: both an HR manager and an HR software analyst might access the same employee record system via RBAC. When each asks an AI assistant to “summarise absentee trends for the past six months,” an RBAC-only system would return identical responses, potentially exposing Protected Health Information (PHI) and creating compliance risks.
Under PBAC, the AI assistant adapts its response to each persona. The HR manager receives an anonymised, detailed summary (“Region A saw a 17% increase in medical leave, primarily among customer-facing roles, with stress-related absences being the most common category”), whilst the analyst receives a higher-level trend report without medical context (“Absenteeism increased by 7% quarter over quarter, with the largest increase occurring in the operations department”).
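A minimal sketch of how a PBAC layer might produce those two answers from the same underlying data is shown below; the persona attributes, policy logic, and figures are illustrative assumptions rather than Knostic's actual design.

```python
from dataclasses import dataclass

# Hypothetical personas and policy attributes for illustration only; Knostic's
# implementation is not public, so the names and rules here are assumptions.

@dataclass
class Persona:
    name: str
    role: str
    may_view_phi: bool   # allowed to see anonymised health-related detail
    detail_level: str    # "detailed" or "aggregate"

# Both roles pass the same RBAC check: each can reach the employee record system.
def rbac_allows(role: str, resource: str) -> bool:
    permitted = {
        "hr_manager": {"employee_records"},
        "hr_analyst": {"employee_records"},
    }
    return resource in permitted.get(role, set())

# PBAC asks a different question: given this persona and this request,
# how much knowledge should the response actually contain?
def pbac_shape_response(persona: Persona, stats: dict) -> str:
    if persona.may_view_phi and persona.detail_level == "detailed":
        return (
            f"Region A saw a {stats['medical_leave_increase_pct']}% increase in "
            "medical leave, primarily among customer-facing roles, with "
            "stress-related absences the most common category."
        )
    # Default path: aggregate trend only, with the medical context stripped out.
    return (
        f"Absenteeism increased by {stats['overall_increase_pct']}% quarter over "
        "quarter, with the largest increase in the operations department."
    )

if __name__ == "__main__":
    stats = {"medical_leave_increase_pct": 17, "overall_increase_pct": 7}
    manager = Persona("Dana", "hr_manager", may_view_phi=True, detail_level="detailed")
    analyst = Persona("Sam", "hr_analyst", may_view_phi=False, detail_level="aggregate")

    for persona in (manager, analyst):
        assert rbac_allows(persona.role, "employee_records")  # identical RBAC outcome
        print(persona.role, "->", pbac_shape_response(persona, stats))
```

The point the sketch illustrates is that both personas clear the identical RBAC check, yet the knowledge the assistant returns is shaped by persona attributes rather than by file-level permissions alone.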
Governance Through Auditability
Several internet security vendors are testing PBAC implementations that sit between large language models and end users, enforcing policies mapped to compliance, privacy, and security requirements. These systems function as “content firewalls,” filtering not only toxic content but also, based on the user's persona, topics that pose business risk.
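A rough sketch of that middleware pattern, assuming a hypothetical persona policy map, keyword-based topic detection, and a stand-in call_llm function (no real vendor API is referenced), could look like this:

```python
# Sketch of a PBAC "content firewall" sitting between an LLM and its users.
# The policy map, topic detection, and call_llm stub are hypothetical; a real
# deployment would use an actual model client and a proper topic classifier
# mapped to compliance and privacy policies.

BLOCKED_TOPICS_BY_PERSONA = {
    "junior_employee": {"insider_trading_guidance", "phi"},
    "compliance_officer": set(),            # may reference regulatory guidance
    "business_user": {"internal_it_docs"},
    "developer": set(),
}

def detect_topics(text: str) -> set:
    # Placeholder keyword matching; real systems would classify topics properly.
    topics = set()
    lowered = text.lower()
    if "insider trading" in lowered:
        topics.add("insider_trading_guidance")
    if "medical leave" in lowered:
        topics.add("phi")
    return topics

def call_llm(prompt: str) -> str:
    return f"[model answer to: {prompt}]"    # stand-in for the real model call

def firewall(persona: str, prompt: str) -> str:
    blocked = BLOCKED_TOPICS_BY_PERSONA.get(persona, set())
    draft = call_llm(prompt)
    hits = detect_topics(prompt) | detect_topics(draft)
    if hits & blocked:
        return "This topic is restricted for your role. Please contact compliance."
    return draft

print(firewall("junior_employee", "Summarise the insider trading rules we follow"))
print(firewall("compliance_officer", "Summarise the insider trading rules we follow"))
```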
Crucially, PBAC enables governance through audit trails documenting what content was restricted, under which policy, and for whom. This auditability aligns with emerging AI regulations including the EU AI Act and NIST AI Risk Management Framework, which push enterprises toward traceable and transparent governance.
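A restriction event could be recorded along the following lines; the field names are illustrative assumptions rather than a schema prescribed by the EU AI Act or the NIST framework, both of which call for traceability without mandating a specific log format.

```python
import json
from datetime import datetime, timezone

# Illustrative audit record for a PBAC restriction event; the field names are
# assumptions, not a schema defined by the EU AI Act or the NIST AI RMF.

def audit_record(persona: str, policy_id: str, topic: str, action: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "persona": persona,        # for whom the content was filtered
        "policy_id": policy_id,    # which policy triggered the restriction
        "topic": topic,            # what category of content was affected
        "action": action,          # e.g. "redacted", "blocked", "allowed"
    }
    return json.dumps(record)

# Example: logging the PHI redaction applied to the analyst's absenteeism query.
print(audit_record("hr_analyst", "policy-phi-001", "phi", "redacted"))
```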
The co-founder argues that as AI embeds deeper into business workflows, consumer-grade guardrails become insufficient. “Overly restrictive, one-size-fits-all controls stifle innovation and frustrate legitimate work, while loose or non-existent safeguards invite regulatory and reputational risk.” PBAC represents the evolution toward nuanced, context-aware controls that align with enterprise access management principles—making AI governance “look a lot less like parental controls, and a lot more like cybersecurity.”
Source: TechRadar Pro