TL;DR

Gartner predicts over 40% of agentic AI projects will be cancelled by end of 2027, often due to inadequate risk controls and uncertain ROI. Technology leaders must shift focus from compelling pilots to operational readiness—treating AI as an enterprise application requiring unified platforms spanning compute, data, and governance to ensure successful scaling beyond experimentation.

The Gap Between Enthusiasm and Execution

Artificial intelligence is often pitched as an “existential” issue for enterprises. However, evidence is piling up that many AI projects fail to make it beyond experimentation or pilot stages, despite undoubted enthusiasm and investment in AI tools.

Gartner’s prediction that over 40% of agentic AI projects will be cancelled by the end of 2027—often because of inadequate risk controls and uncertain ROI—highlights the failure to launch that wastes investment and corrodes long-term confidence in the technology.

This creates a gap between organisations steadily making the transition to enterprise AI and those struggling to make it work. The gap will only widen as generative AI gives way to agentic AI.

Vision Alone Insufficient for Scale

Having a vision is key to AI success, as is having the data to inform a distinctive AI strategy. Together with seed investment, this might deliver a dazzling pilot. But Gartner's figures show clearly this isn't enough to ensure success at enterprise scale.

The missing element is operational readiness for AI: the ability to deploy, manage, and scale AI out of the lab and across entire organisations. This means putting in the hard work to ensure that what starts as a compelling but disconnected pilot is woven into the enterprise at large.

It requires ensuring AI runs across a unified platform spanning compute, data, and governance—a platform that can be replicated throughout the organisation, whether on-premises, in the cloud, or at the edge.

Infrastructure Beyond GPUs

It’s easy to think AI infrastructure management begins and ends with GPUs. However, high bandwidth memory, fast storage, and matching networking all play their part, as do other processors and accelerators depending on workflow stages.

Most importantly, that infrastructure—whether on-premises, in the cloud, or hybrid—needs to adapt and scale as projects move from local pilot to enterprise production. AI, by its nature, may prove much stickier than traditional corporate workloads.

Security and governance are non-negotiable when it comes to enterprise AI projects. Underlying data and organisations' own models are central to their future and must be held close. Data sovereignty and AI regulations further complicate matters. Technology leaders need to know their data is where they say it is, and be clear exactly who can—and cannot—access it.

Managing Costs and Energy Consumption

The possibilities of AI are limitless, but so is the price tag if underlying infrastructure is not managed appropriately. Simply paying for GPUs and the power to run them, then leaving them underutilised, blows holes in ROI and undermines ESG commitments.

Technology leaders need to plan how they scale capacity up—and down—from the outset. They also need to manage and predict costs, which requires confidence that their platform and toolkit allow them to do this easily.
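The utilisation point above can be made concrete with a little arithmetic. The sketch below is illustrative only—the hourly rate and utilisation figures are hypothetical, not drawn from the article—but it shows why idle GPUs blow holes in ROI: the effective cost of each hour of useful work scales inversely with utilisation.

```python
# Illustrative sketch: how GPU underutilisation inflates effective cost.
# The $4/hour rate and utilisation levels are hypothetical examples.

def effective_cost_per_useful_hour(hourly_rate: float, utilisation: float) -> float:
    """Cost per hour of *useful* GPU work, given average utilisation in (0, 1]."""
    if not 0 < utilisation <= 1:
        raise ValueError("utilisation must be in (0, 1]")
    return hourly_rate / utilisation

# A cluster billed at $4/GPU-hour but only 25% utilised effectively
# costs $16 per hour of useful work; at 50% utilisation, $8.
print(effective_cost_per_useful_hour(4.0, 0.25))  # 16.0
print(effective_cost_per_useful_hour(4.0, 0.50))  # 8.0
```

The same relationship works in reverse: doubling utilisation halves the effective cost of work already paid for, which is why scaling down idle capacity matters as much as scaling up.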

This becomes even more critical as AI agents enter the picture. Security, governance, and compliance must be ensured even as agents access and generate data and make decisions. Infrastructure must support them and handle spikes in demand as their actions play out. The location of assets must be considered to reduce latency for inference workloads running in real time, and energy use must be kept within acceptable bounds.
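One way to picture the demand-spike requirement is a simple capacity policy: size inference capacity to the current backlog, but clamp it between a cost-saving floor and an energy-and-budget ceiling. This is a hypothetical sketch, not any particular platform's autoscaler—the thresholds and capacity figures are invented for illustration.

```python
# Hypothetical capacity policy: scale inference replicas with demand,
# clamped between a floor (cost/energy savings when idle) and a ceiling
# (budget and power limits during spikes). All numbers are illustrative.

def target_replicas(queue_depth: int, per_replica_capacity: int,
                    min_replicas: int = 1, max_replicas: int = 8) -> int:
    """Replicas needed to drain the current request queue, clamped to bounds."""
    needed = -(-queue_depth // per_replica_capacity)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))

print(target_replicas(0, 10))   # 1 -- idle: drop to the floor, saving cost and energy
print(target_replicas(95, 10))  # 8 -- spike: demand wants 10, clamped at the ceiling
```

The clamp is the interesting design choice: without a ceiling, an agent-driven demand spike translates directly into unbounded spend and energy use; without a floor, cold starts add latency to real-time inference.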

Turnkey Approach to Operational Readiness

True operational readiness demands a turnkey approach to AI in the shape of a full-stack platform with the ability to span GPUs and other needed accelerators. It must include integrated data services supporting the full range of formats AI will need, together with matching security and governance controls.

Platforms should support both VMs and containers, with the ability to orchestrate them. Racing towards operationalising AI is challenging enough—no-one wants to run a cloud-native migration at the same time.

LLMs might not always deliver repeatable answers, but the infrastructure generative AI and agentic AI rely on must be repeatable if companies are to scale it in line with demand. This includes the cloud, as well as on-premises and edge environments.

Looking Forward

When they have the right platform and tools, technology leaders can ensure staff focus on steadily maximising the value gained from AI investments—rather than burning time and resources wrestling successful pilot projects into enterprise-wide deployments.

Whether betting the farm on AI or recognising AI will be part of broader toolkits, technology leaders must recognise AI is an enterprise application. Enterprise applications need enterprise-grade infrastructure supporting them from pilot stage, into production, and into the future.

That’s what will assure organisations’ existence in the long term. The gap between AI experimenters and AI executors will only widen as agentic capabilities advance—making operational readiness the determining factor in whether organisations capture AI’s value or join the 40% of cancelled projects.


Source: TechRadar
