Finding return on AI investments across industries

TL;DR: Three years post-ChatGPT, the MIT NANDA report shows 95% of AI pilots fail to scale or deliver measurable ROI. Three principles for successful deployment: treat data as negotiable value with model vendors, design for stability over novelty (boring-by-design), and align implementations to business consumption patterns rather than vendor benchmarks (mini-van economics).

The market is officially three years post-ChatGPT, and many pundit bylines have shifted to terms like “bubble” to explain why generative AI has not delivered material returns outside a handful of technology suppliers.

Context and Background

In September, the MIT NANDA report made waves with its finding that 95% of AI pilots fail to scale or deliver clear, measurable ROI. McKinsey had earlier published a similar trend, positioning agentic AI as the path to large operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders advised CIOs to stop worrying about AI’s return on investment, arguing that gains are difficult to measure and that any attempted measurements would be wrong.

This places technology leaders in a precarious position—robust tech stacks already sustain their business operations, so what is the upside to introducing new technology?

For decades, deployment strategies have followed a consistent cadence: tech operators avoid destabilizing business-critical workflows just to swap out individual components in their stacks. A better or cheaper technology is not meaningful if it puts your disaster recovery at risk.

First Principle: Your Data is Your Value

Most articles about AI data focus on the engineering work required to ensure an AI model infers against business data in repositories that reflect past and present business realities.

However, one of the most widely deployed enterprise AI use cases begins with prompting a model by uploading file attachments. This step narrows the model’s scope to the content of the uploaded files, speeding accurate responses and reducing the number of prompts needed to reach the best answer.

This tactic relies on sending your proprietary business data into an AI model, so there are two important considerations: first, govern your system for appropriate confidentiality; and second, develop a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without access to non-public data, such as your business’ data.
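The confidentiality half of that equation can be enforced before any file leaves the business. As a minimal sketch (the patterns and placeholders below are illustrative assumptions, not a real policy), a scrubbing gate might strip obvious identifiers from a document before it is attached to a prompt:

```python
# Minimal sketch of a confidentiality gate: scrub obvious identifiers from
# a document before it is uploaded as a prompt attachment. The two patterns
# below are illustrative only; a real deployment would apply its own policy.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def scrub(text: str) -> str:
    """Replace each sensitive pattern with a neutral placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com re: claim 123-45-6789"))
# Contact [REDACTED-EMAIL] re: claim [REDACTED-SSN]
```

Running attachments through a gate like this keeps the upload workflow's speed benefits while giving the business explicit control over what a vendor's model ever sees.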

Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet.

Most enterprises automatically prioritize the confidentiality of their data and design business workflows to protect trade secrets. From an economic point of view, however, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy.

Second Principle: Boring by Design

According to Information is Beautiful, 182 new generative AI models were introduced to the market in 2024 alone. When GPT-5 came to market in 2025, many models from the prior 12 to 24 months were rendered unavailable until subscription customers threatened to cancel; their previously stable AI workflows were built on models that no longer worked.

Video gamers happily upgrade their custom builds throughout the lifespan of their gaming rigs, and will replace an entire system just to play a newly released title. That behaviour does not translate to business run-rate operations: back-office teams cannot sustain swapping out a tech stack three times a week to keep up with the latest model drops. Back-office work is boring by design.

The most successful AI deployments have focused on business problems unique to the business, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human’s responsibility zone, combines the best of both.

The important point is that none of these tasks requires constant updates to the latest model to deliver value. This is also an area where abstracting your business workflows from direct model APIs can offer additional long-term stability whilst maintaining options to update or upgrade the underlying engines at the pace of your business.
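One way to picture that abstraction: workflows call one stable internal interface, and the vendor engine behind it becomes a configuration detail. This is a minimal sketch under assumed names (`ModelGateway`, `Completion`, and the backends are all hypothetical), not any vendor's real SDK:

```python
# Minimal sketch of a model-abstraction layer: business workflows depend on
# one stable interface, and the underlying engine can be swapped without
# touching workflow code. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    model: str

class ModelGateway:
    """Routes completion requests to whichever backend is active."""
    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._active = ""

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self._backends[name] = backend

    def activate(self, name: str) -> None:
        if name not in self._backends:
            raise KeyError(f"unknown backend: {name}")
        self._active = name

    def complete(self, prompt: str) -> Completion:
        # Workflows see only this signature, never a vendor SDK.
        return Completion(self._backends[self._active](prompt), self._active)

# Swapping the engine is a configuration change, not a workflow rewrite.
gateway = ModelGateway()
gateway.register("model-a", lambda p: f"[model-a] {p}")
gateway.register("model-b", lambda p: f"[model-b] {p}")
gateway.activate("model-a")
print(gateway.complete("Summarise Q3 expense anomalies").model)  # model-a
gateway.activate("model-b")
print(gateway.complete("Summarise Q3 expense anomalies").model)  # model-b
```

The design choice is the point: when a vendor retires a model, only the gateway's configuration changes, and the boring back-office workflow stays boring.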

Third Principle: Mini-van Economics

The best way to avoid upside-down economics is to design systems to align to the users rather than vendor specs and benchmarks.

Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, with the capabilities they have deployed today.

Whilst Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack ample trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimise spending on third-party services.

Too many companies have found that their customer support AI workflows add millions of pounds in operational run-rate costs, then demand still more development time and cost to rework the implementation for OpEx predictability. Meanwhile, the companies that decided a system running at the pace a human can read (less than 50 tokens per second) was sufficient were able to deploy scaled-out AI applications with minimal additional overhead.

Looking Forward

There are many aspects of this new automation technology to unpack. The best guidance is to start practical, design for independence from underlying technology components so stable applications are not disrupted over the long term, and leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.

Rather than approaching model purchase and onboarding as a typical supplier/procurement exercise, think through the potential to realise mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.

