Responsible AI Governance Linked to Better Business Outcomes

TL;DR:

  • Companies with advanced responsible AI governance, particularly real-time monitoring, are 34% more likely to report revenue growth improvements and 65% more likely to achieve cost savings, alongside gains in employee satisfaction
  • 99% of organisations surveyed experienced financial losses from AI-related risks, averaging £3.4 million per company
  • Significant gaps persist in C-suite knowledge, governance frameworks, and oversight of employee-led AI adoption

The EY organisation has released findings from its second Responsible AI Pulse survey, indicating that companies implementing advanced responsible AI measures are pulling ahead whilst others stall. The research, conducted across 975 C-suite leaders from 21 countries, reveals a growing divide between organisations with robust AI governance and those without.

Context and Background

Roughly four in five respondents reported improvements in innovation (81%) and efficiency (79%), whilst about half saw boosts in revenue growth (54%), cost savings (48%), and employee satisfaction (56%). The survey demonstrates that organisations with real-time monitoring capabilities are 34% more likely to see improvements in revenue growth and 65% more likely to achieve cost savings compared to those without such measures.

On average, organisations have already implemented seven of the 10 responsible AI measures examined in the survey. Across all measures, fewer than 2% of respondents reported having no plans for implementation, pointing to broad engagement with responsible AI principles and strong intent to continue progressing on their governance journey.

However, the research also highlights significant challenges. Almost all organisations (99%) reported financial losses from AI-related risks, with nearly two-thirds (64%) suffering losses exceeding £770,000. The most common AI risks identified were non-compliance with AI regulations (57%), negative impacts on sustainability goals (55%), and biased outputs (53%).

Looking Forward

The survey reveals critical gaps in governance and workforce preparedness, particularly around “citizen developers”—employees independently developing or deploying AI agents. Two-thirds of companies allow this activity, yet only 60% provide formal organisation-wide policies to ensure responsible deployment. Half of these organisations also lack visibility into employee use of AI agents.

Raj Sharma, EY Global Managing Partner for Growth & Innovation, emphasises that responsible AI “is not simply a compliance exercise; it is a driver of trust, innovation, and market differentiation.” Companies that view responsible AI principles as a core business function are positioning themselves to achieve faster productivity gains, unlock stronger revenue growth, and sustain competitive advantage in an AI-driven economy.
