TL;DR: Over two-thirds (68%) of corporate executives violated their own AI usage policies in the past three months, according to Nitro’s survey of 1,003 business leaders and staff. Half of all employees also admit to unapproved AI tool usage. Despite 97% of companies investing over $1 million in AI—and 61% investing over $10 million—approved enterprise tools lose to consumer AI on speed, simplicity and user experience.

The C-Suite Compliance Gap

Nitro’s “Enterprise AI: The Reality Behind the Hype” survey reveals a striking disconnect: over half of corporate leaders identify security and compliance as AI’s greatest implementation challenge, yet more than two-thirds routinely violate their own policies. The findings suggest executives bet that AI’s competitive advantages outweigh security risks, according to Nitro CEO Cormac Whelan.

Employees follow their leaders’ example, with half admitting to unapproved AI tool usage. This “shadow IT” phenomenon, the adoption of unauthorised software within organisations, recently prompted Microsoft to shift from prevention to management strategies.

Whelan advocates a similar approach: “Instead of fighting shadow AI adoption, IT leaders should focus on establishing guardrails rather than roadblocks and providing secure alternatives that executives will actually use.”

Investment Outpaces Governance

Corporate AI investment has surged ahead of planning and governance frameworks. Nitro’s survey data shows:

  • 97% of companies have invested over $1 million in AI to date
  • 61% have invested more than $10 million
  • 70% plan to invest over $10 million in the next 12 months

Yet adoption pressure remains minimal: 57% of respondents report low or no pressure to use AI tools, including 28% who feel no pressure at all. A significant training perception gap also separates the C-suite from staff: 89% of executives rate company AI training as excellent or good, compared with just 63% of employees.

Gartner data suggests investment alone delivers limited value. In EMEA markets, 73% of CIOs report their organisations breaking even or losing money on AI investments. For every AI tool purchased, organisations should anticipate ten hidden costs, plus training and change-management expenses during the transition.

Security and Trust Divide

A stark gap separates executive confidence from employee concerns. Eighty-two percent of executives believe their AI tools meet security and compliance requirements, even as they flout their own corporate rules. Only 55% of employees share this confidence, perhaps because 33% of employees have processed confidential corporate data using AI tools.

This disconnect highlights material security risks from policy violations at both leadership and staff levels.

The User Experience Problem

Whelan frames the shadow AI crisis as an adoption challenge starting at the top: “When 68 percent bypass the tools they’ve invested millions in, they are sending the same message as employees: approved tools can’t help them get their work done.”

The root cause: most enterprise AI platforms lose to consumer AI applications on critical dimensions—speed, simplicity, and user experience. Whelan characterises this as “a wake-up call that adoption is earned, not mandated.”

C-suite executives, having built careers on finding workarounds, apply the same approach to AI tools when approved solutions fail to meet productivity needs. This behaviour sets organisational tone, signalling to employees that policy violations are acceptable when approved tools prove inadequate.

Survey Methodology

Nitro commissioned the study through Zogby Analytics and Pollfish, surveying 103 C-suite executives and 900 managers and individual contributors from the US, UK, and Canada during October 2025. Respondents worked in professional services (44%), financial services (33%), manufacturing (21%), the legal sector, and other industries.

The findings illuminate shadow IT extent, spending patterns, and the fundamental disconnect between enterprise leadership views and staff actions regarding AI adoption and governance.


Source: The Register