The Largest AI Security Risks Aren’t in Code—They’re in Culture
TL;DR: A resilient AI system reflects the culture of the team behind it, argues Darren Lewis, Senior Lead at Plexal. Most AI risks accumulate not from flaws in logic but from unclear ownership, unmanaged updates, and fragmented decision-making, which makes cultural clarity the primary control mechanism.
As AI becomes more embedded in businesses and used by employees and the public, the systems we rely on are becoming harder to govern. The risks AI introduces aren’t usually dramatic or sudden—they emerge gradually through coordination gaps rather than code failures.
Reframing AI Security
When AI security is discussed, the focus tends to sit on the technical layer: clean datasets, robust algorithms, and well-structured models. These are visible, tangible components, and they matter.
“But in practice, most risks accumulate not from flaws in logic but from gaps in coordination,” Lewis argues. “They tend to build slowly when updates aren’t logged, when models move between teams without context, or when no one is quite sure who made the last change.”
The UK’s Cyber Security and Resilience Bill introduces new requirements for operational assurance, continuous monitoring, and incident response—especially for service providers supporting critical systems. However, whilst the Bill sharpens expectations around infrastructure, it has yet to fully capture how AI is actually developed and maintained in practice.
Where Risk Tends to Build Up
AI development rarely stays within a single team. Models are retrained, reused, and adapted as needs shift. That flexibility is part of their value, but it also adds layers of complexity.
Small changes can have wide-reaching effects:
- One team might update training data to reflect new inputs
- Another might adjust a threshold to reduce false positives
- A third might deploy a model without checking how it was configured before
“None of these decisions are inherently wrong,” Lewis notes. “But when teams can’t trace a decision back to its origin, or no one is sure who approved a change, the ability to respond quickly is lost.”
These aren’t faults of code or architecture—they’re signs that the way teams build, adapt, and hand over systems hasn’t kept pace with how widely those systems are now used.
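To make that traceability gap concrete, here is a minimal sketch in Python of the kind of lightweight change record that lets a team answer "who made the last change, and why?" for a model. The names and structure are hypothetical, an illustration of the practice described above rather than any specific tool or the author's own method.

```python
# Minimal sketch (hypothetical names): a lightweight audit trail that makes the
# scenarios above traceable -- data refreshes, threshold tweaks, redeployments.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ChangeRecord:
    model_id: str          # which model was touched
    change_type: str       # e.g. "training-data-update", "threshold-adjustment", "deployment"
    description: str       # what changed and why
    author: str            # who made the change
    approved_by: str       # who signed it off
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ChangeLog:
    records: list[ChangeRecord] = field(default_factory=list)

    def record(self, entry: ChangeRecord) -> None:
        """Append a change so it can be traced back to its origin later."""
        self.records.append(entry)

    def last_change(self, model_id: str) -> ChangeRecord | None:
        """Answer the question 'who made the last change to this model?'"""
        matches = [r for r in self.records if r.model_id == model_id]
        return matches[-1] if matches else None


# Usage: a threshold tweak logged with its context and its approver.
log = ChangeLog()
log.record(ChangeRecord(
    model_id="fraud-scorer-v3",
    change_type="threshold-adjustment",
    description="Raised decision threshold from 0.70 to 0.75 to cut false positives",
    author="team-b",
    approved_by="model-owner-a",
))
print(log.last_change("fraud-scorer-v3"))
```

Even a record this small captures the context that is otherwise lost when a model moves between teams: what changed, who did it, and who agreed to it.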
Turning Culture Into a Control Surface
If risk accumulates in day-to-day habits, resilience must be built in the same place. Culture becomes more than an enabler of good practice—it becomes a mechanism for maintaining control as systems scale.
That principle is reflected in regulation:
- EU AI Act: Sets requirements for high-risk systems, including conformity assessments and voluntary codes of practice
- UK DSIT AI Cyber Security Code of Practice: Pairs high-level principles with practical guidance that helps businesses turn policy into working norms
Research initiatives such as the UK’s LASR (Laboratory for AI Security Research) show how communication, handovers, and assumptions between teams shape trust as much as the models themselves. The National AI Awards highlight organisations putting cultural governance into practice and establishing clearer standards of maturity.
Practical Implementation
For businesses, the task is to make cultural clarity a more integrated part of operational design. The more that teams can rely on shared norms, visible ownership, and consistent decision-making, the more resilient their AI systems will become over time.
This means:
- Clear ownership: Every model has a designated owner responsible for changes
- Visible change logs: All updates are documented with context and approval (see the sketch after this list)
- Shared norms: Teams operate with consistent standards and decision-making frameworks
- Regular reviews: Cultural governance is assessed and improved continuously
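As one way of turning the first two of these norms into a working control, the sketch below (Python, hypothetical names, independent of any particular registry or MLOps tooling) shows an ownership check: a change is only accepted if the model has a registered owner and that owner has signed it off.

```python
# Sketch (hypothetical names): ownership as a control surface. A change is only
# accepted when the model has a registered owner and that owner approved it.
class UnownedModelError(Exception):
    """Raised when a change targets a model with no designated owner."""


class UnapprovedChangeError(Exception):
    """Raised when a change lacks sign-off from the registered owner."""


class OwnershipRegistry:
    def __init__(self) -> None:
        self._owners: dict[str, str] = {}  # model_id -> owning team or person

    def assign_owner(self, model_id: str, owner: str) -> None:
        self._owners[model_id] = owner

    def validate_change(self, model_id: str, approved_by: str) -> None:
        """Refuse changes that have no clear owner or no owner sign-off."""
        owner = self._owners.get(model_id)
        if owner is None:
            raise UnownedModelError(f"{model_id} has no designated owner")
        if approved_by != owner:
            raise UnapprovedChangeError(
                f"Change to {model_id} approved by {approved_by}, expected {owner}"
            )


# Usage: the registry blocks an update the owning team never signed off.
registry = OwnershipRegistry()
registry.assign_owner("fraud-scorer-v3", "risk-models-team")
registry.validate_change("fraud-scorer-v3", approved_by="risk-models-team")  # passes
# registry.validate_change("fraud-scorer-v3", approved_by="platform-team")   # would raise
```

Paired with a change log like the one sketched earlier, a check of this kind keeps the list above from remaining aspirational: every update carries an owner, a context, and an approval.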
Looking Ahead
As AI becomes part of everyday decision-making, leadership focus must shift from individual model performance to the wider environment those systems operate in.
“That means moving beyond project-level fixes and investing in the connective tissue between teams—the routines, forums, and habits that give AI development the structure to scale safely,” Lewis explains.
Building that maturity takes time, but it starts with clarity: clarity of ownership, of change, and of context.
The Bottom Line
“The organisations that make progress will be those that treat culture not as a soft skill, but as a working asset—something to be reviewed, resourced, and continuously improved,” Lewis concludes.
As AI becomes more pivotal to how businesses operate, it is this cultural structure that will ultimately shape security: embedded habits that make risk easier to see, surface, and act on.
The message is clear: whilst technical safeguards matter, the largest AI security risks accumulate in the gaps between teams, in unclear ownership, and in untracked changes. These are cultural issues, and they require cultural solutions.
Source Attribution:
- Source: TechRadar
- Original: https://www.techradar.com/pro/the-largest-ai-security-risks-arent-in-code-theyre-in-culture
- Author: Darren Lewis (Senior Lead at Plexal, National AI Awards Advisory Board Member)
- Published: November 2025