Retrieval-Augmented Generation Can Manage Expectations of AI
TL;DR: As AI adoption accelerates (39% of UK organisations already using it), retrieval-augmented generation (RAG) addresses accuracy concerns by grounding responses in verified data. This approach builds trust through realistic expectations rather than demanding perfection, enabling responsible incremental adoption.
Philip Miller, AI Strategist at Progress Software, explores how organisations can rethink AI implementation by focusing on small use cases, continuous testing, and RAG frameworks that anchor outputs in contextually accurate information.
Context and Background
With AI adoption rising across the finance, healthcare, manufacturing, and retail sectors, the debate has shifted from “whether” to “how quickly, and where.” However, as implementations multiply, so do unrealistic expectations: whilst people accept human errors as learning opportunities, they assume AI must deliver flawless outputs every time.
This double standard damages trust, slows adoption, and holds back innovation. When AI delivers an imperfect answer, many conclude the technology isn’t ready for wider deployment. Yet these errors aren’t bugs; they are the expected trade-offs of probabilistic models.
Expecting flawless AI performance is analogous to hiring an employee and demanding their work be perfect every time. Yet AI models fail, learn, and improve on far faster cycles than people do (minutes or days rather than weeks or months), suggesting our deployment approaches should be equally flexible.
Changing Perspectives on AI Errors
Organisations that adopt multi-year, top-down transformation plans risk waiting for a “perfect” AI version that may never arrive. Instead, short-term incremental projects deliver value quickly before scaling.
Responsible AI in practice requires:
- Achievable goals: Target use cases deliverable in weeks or months to generate early wins demonstrating tangible value and building confidence
- Learning from mistakes: Treat each AI error as a learning opportunity. Analysing errors, refining prompts, and experimenting with different models all improve performance over time (see the sketch after this list)
- Gradual expansion: Once initial use cases deliver benefits, expand adoption across the wider organisation, maintaining oversight to ensure outputs remain accurate, relevant, and ethical
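To make the “learning from mistakes” step concrete, here is a minimal sketch of an error-review loop. The test cases, the pass criterion, and the call_model stub are illustrative assumptions; in practice the stub would be replaced with whichever model API the team actually uses.

```python
# Minimal sketch of the error-review loop described above. The test
# cases, pass criterion, and stub model are illustrative assumptions.

TEST_CASES = [
    {"question": "What is the refund window?", "must_contain": "14 days"},
    {"question": "How much storage does the starter plan include?", "must_contain": "5 GB"},
]

# Stand-in for whichever LLM API a team actually uses; canned answers
# let the loop below run end to end without a provider account.
STUB_ANSWERS = {
    "What is the refund window?": "Refunds are processed within 14 days.",
    "How much storage does the starter plan include?": "Each user gets 2 GB.",
}


def call_model(prompt: str, question: str) -> str:
    """Hypothetical model call: replace with a real provider client."""
    return STUB_ANSWERS.get(question, "I don't know.")


def evaluate(prompt: str) -> list[dict]:
    """Run every test case and collect failures for human review."""
    failures = []
    for case in TEST_CASES:
        answer = call_model(prompt, case["question"])
        if case["must_contain"] not in answer:
            failures.append({"question": case["question"], "answer": answer})
    return failures


if __name__ == "__main__":
    # Each failure is a concrete error to analyse before refining the
    # prompt or switching models, then re-running the whole suite.
    for failure in evaluate(prompt="Answer concisely."):
        print(f"FAIL: {failure['question']!r} -> {failure['answer']!r}")
```

Re-running the suite after each prompt or model change turns every failure into exactly the kind of learning opportunity described above, whilst keeping the project's scope small and measurable.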
Building Trust Through RAG
Retrieval-augmented generation (RAG) is one of the most effective ways to improve AI reliability. Within a RAG framework, AI systems retrieve relevant, up-to-date information from various sources before generating responses.
This ensures outputs are anchored in verified, contextually accurate data rather than relying solely on potentially outdated or incomplete patterns learned during training.
RAG benefits:
- Reduced hallucinations: Responses are grounded in verified sources
- Context-aware answers: The model draws on current, relevant information
- Increased stakeholder confidence: Demonstrable accuracy builds trust
- Continuous improvement: Careful, iterative use creates feedback loops that strengthen trust and keep insights actionable
By connecting AI to data correctly, organisations can deliver the reliable outputs that responsible adoption at scale depends on.
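As a minimal illustration of that connection, the sketch below retrieves passages from a toy in-memory knowledge base and assembles a grounded prompt. The documents, the keyword-overlap scoring (a dependency-free stand-in for the embedding search a production system would use), and the prompt wording are all illustrative assumptions rather than any specific product's implementation.

```python
# Minimal RAG sketch: retrieve verified passages, then build a grounded
# prompt. Documents, scoring, and prompt wording are illustrative
# assumptions, not the article's implementation.

from dataclasses import dataclass


@dataclass
class Document:
    """A verified source passage the model may draw on."""
    source: str
    text: str


# Toy in-memory knowledge base of verified, up-to-date passages.
KNOWLEDGE_BASE = [
    Document("policy-handbook", "Refunds are processed within 14 days of a return."),
    Document("product-faq", "The starter plan includes 5 GB of storage per user."),
]


def retrieve(query: str, docs: list[Document], top_k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query.

    Production systems typically use embedding vectors and a similarity
    index; word overlap keeps this sketch dependency-free.
    """
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.text.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_grounded_prompt(query: str, docs: list[Document]) -> str:
    """Anchor the answer in retrieved passages, with source labels."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    question = "How many days do refunds take?"
    prompt = build_grounded_prompt(question, retrieve(question, KNOWLEDGE_BASE))
    print(prompt)  # Send this prompt to whichever LLM the team has chosen.
```

Labelling each retrieved passage with its source keeps answers auditable, which is what makes the demonstrable accuracy mentioned above possible.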
Looking Forward
Every organisation operating within the AI era faces the same trust challenges. What separates success from failure is the ability to anticipate errors, design working methods that identify them quickly, and adapt accordingly.
AI is neither infallible nor magical, but it is a powerful resource. Organisations that balance ambition with realism—focusing on achievable goals, learning from mistakes, and leveraging RAG frameworks for accuracy—will be the ones that succeed.
As Miller notes: “AI models are inherently imperfect, so each mistake should be treated as an important learning opportunity. Small adjustments allow teams to continuously enhance results whilst keeping projects manageable.”
Source Attribution:
- Source: TechRadar Pro
- Original: https://www.techradar.com/pro/retrieval-augmented-generation-can-manage-expectations-of-ai
- Published: 30 October 2025
- Author: Philip Miller (AI Strategist, Progress Software)