EA Faces Internal AI Resistance as Employees Question Mandated Tools

TL;DR: Electronic Arts’ aggressive AI adoption strategy has created significant internal friction, with employees reporting that mandated AI tools produce flawed code and hallucinations whilst raising concerns about job security. The divide reflects broader workplace tensions as 92% of C-suite leaders expect productivity gains whilst 40% of employees blame AI for increased workloads.

A recent meme circulating on Electronic Arts’ internal Slack captures the mounting tension: cartoon CEOs demanding “AI! Right now!” without knowing what AI should actually do. The post, met with dozens of crying-laughing emojis from EA employees, illustrates a fundamental divide between leadership enthusiasm and workforce scepticism that’s roiling the video game giant and corporate workplaces globally.

The Mandate vs Reality Gap

Electronic Arts has spent the past year urging its nearly 15,000 employees to integrate AI into virtually every aspect of their work—from creative tasks like generating code and concept art to sensitive managerial conversations about pay and promotions. Internal documents show employees in some divisions must complete multiple AI training courses, use AI tools daily, and view generative AI as a “thought partner.”

However, employees speaking anonymously to Business Insider report significant quality issues with these mandated tools, particularly the company’s in-house chatbot ReefGPT. Staff describe outputs requiring substantial correction work, with flawed code and AI hallucinations creating additional workload rather than reducing it.

Most concerning for many employees is the expectation that creative staff train AI programs on their own work, potentially accelerating their own obsolescence. One recently laid-off senior quality assurance designer suspects that AI's ability to review and summarise feedback from hundreds of playtesters contributed to the elimination of his position during spring 2024 layoffs that affected approximately 100 colleagues at Respawn Entertainment.

Industry-Wide Creative Resistance

The friction at EA reflects broader creative industry concerns. A 2025 survey of 3,000 video game creators found nearly a third reporting negative impacts from generative AI—a 12-point increase from 2024. About half expressed serious ethical concerns, up from 42% the previous year, citing intellectual property theft, energy consumption, and potential biases.

“It’s a problem when the dogs won’t eat the dog food,” notes Doug Creutz, TD Cowen analyst covering entertainment. The resistance from workers who pioneered online networks, mobile apps, and virtual worlds raises fundamental questions about AI adoption strategies across all industries.

The Leadership-Employee Perception Divide

Two recent global surveys underscore the widening gap:

  • Dayforce study (7,000 professionals): 87% of executives use AI daily, compared with 57% of managers and just 27% of employees
  • Upwork study (2,500 respondents): 92% of C-suite leaders expect AI productivity boosts, whilst 40% of employees blame it for heavier workloads

Financial Pressure Drives AI Push

EA’s AI embrace coincides with financial challenges. Net income fell 9.4% in the fiscal year ending June 30, 2025, with Q4 showing a 28% plunge. The broader gaming industry has contracted significantly, with an estimated 14,600 jobs cut in 2024 alone following pandemic-era expansion.

EA CEO Andrew Wilson described AI as existential for the company's future at a September 2024 Investor Day: "This remarkable technology is not merely a buzzword for us. It's the very core of our business."

Yet EA’s own SEC filing acknowledges risks: “The use of artificial intelligence might present social and ethical issues that, if not managed appropriately, may result in legal and reputational harm.”

Looking Forward

Research suggests successful AI adoption requires carefully matching technology to specific tasks. MIT professor Jackson G. Lu’s meta-analysis of 163 studies found people prefer AI for tasks where machines demonstrably outperform humans and personalisation isn’t critical—such as forecasting or pattern recognition.

However, in highly personalised, identity-laden, or creative work, employees strongly prefer human involvement. Lu recommends leaders start by deploying AI for clear numeric estimation and forecasting tasks before gradually incorporating it into work requiring taste, fairness, or empathy—whilst maintaining human oversight.

For EA and similar organisations, bridging the AI divide may require moving beyond mandates to demonstrate genuine value. As Creutz observes about worker scepticism: “It sort of goes back to Charlie Brown and Lucy pulling the football away. There’s a fundamental lack of trust.”

Until that trust gap closes, even industry leaders pioneering digital transformation may find their own workforce resistant to the next technological revolution.
