98% of Market Researchers Use AI, but 4 in 10 Say It Makes Errors

TL;DR: A survey of 219 US market researchers reveals 98% now use AI tools, with 72% using them daily or more. Whilst 56% save at least 5 hours weekly, 39% report increased reliance on error-prone technology, 37% cite data quality risks, and 31% describe additional validation work—revealing a productivity paradox where time savings create new verification burdens.

Market researchers have embraced artificial intelligence at a staggering pace, with 98% of professionals now incorporating AI tools into their work and 72% using them daily or more frequently, according to a new industry survey that reveals both the technology’s transformative promise and its persistent reliability problems.

The findings, based on responses from 219 U.S. market research and insights professionals surveyed in August 2025 by QuestDIY, a research platform owned by The Harris Poll, paint a picture of an industry caught between competing pressures: the demand to deliver faster business insights and the burden of validating everything AI produces to ensure accuracy.

The Productivity Paradox

Whilst more than half of researchers—56%—report saving at least five hours per week using AI tools, nearly four in ten say they’ve experienced “increased reliance on technology that sometimes produces errors.” An additional 37% report that AI has “introduced new risks around data quality or accuracy,” and 31% say the technology has “led to more work re-checking or validating AI outputs.”

The disconnect between productivity gains and trustworthiness has created what amounts to a grand bargain in the research industry: professionals accept time savings and enhanced capabilities in exchange for constant vigilance over AI’s mistakes, a dynamic that may fundamentally reshape how insights work gets done.

From Skeptics to Daily Users in Under a Year

The numbers suggest AI has moved from experiment to infrastructure in record time. The 72% daily-use figure breaks down into 39% of researchers who use AI about once per day and 33% who use it “several times per day or more.” Adoption is accelerating: 80% of researchers say they’re using AI more than they were six months ago, and 71% expect to increase usage over the next six months. Only 8% anticipate their usage will decline.

“Whilst AI provides excellent assistance and opportunities, human judgement will remain vital,” Erica Parker, Managing Director Research Products at The Harris Poll, told VentureBeat. “The future is a teamwork dynamic where AI will accelerate tasks and quickly unearth findings, whilst researchers will ensure quality and provide high level consultative insights.”

Primary Use Cases

The top use cases reflect AI’s strength in handling data at scale:

  • 58% use it for analysing multiple data sources
  • 54% for analysing structured data
  • 50% for automating insight reports
  • 49% for analysing open-ended survey responses
  • 48% for summarising findings

These tasks—traditionally labour-intensive and time-consuming—now happen in minutes rather than hours.

Beyond time savings, researchers report tangible quality improvements: 44% say AI improves accuracy, 43% report it helps surface insights they might otherwise have missed, 43% cite increased speed of insights delivery, and 39% say it sparks creativity. The overwhelming majority—89%—say AI has made their work lives better, with 25% describing the improvement as “significant.”

The Reliability Problem

Yet the same survey reveals deep unease about the technology’s reliability. The list of concerns is extensive:

  • 39% report increased reliance on error-prone technology
  • 37% cite new risks around data quality or accuracy
  • 31% describe additional validation work
  • 29% report uncertainty about job security
  • 28% say AI has raised concerns about data privacy and ethics

The report notes that “accuracy is the biggest frustration with AI experienced by researchers when asked on an open-ended basis.” One researcher captured the tension succinctly: “The faster we move with AI, the more we need to check if we’re moving in the right direction.”

This paradox—saving time whilst simultaneously creating new work—reflects a fundamental characteristic of current AI systems, which can produce outputs that appear authoritative but contain what researchers call “hallucinations,” or fabricated information presented as fact. The challenge is particularly acute in a profession where credibility depends on methodological rigour and where incorrect data can lead clients to make costly business decisions.

AI as Junior Analyst

“Researchers view AI as a junior analyst, capable of speed and breadth, but needing oversight and judgement,” said Gary Topiol, Managing Director at QuestDIY.

That metaphor—AI as junior analyst—captures the industry’s current operating model. Researchers treat AI outputs as drafts requiring senior review rather than finished products, a workflow that provides guardrails but also underscores the technology’s limitations.

Data Privacy: The Greatest Barrier

When asked what would limit AI use at work, researchers identified data privacy and security concerns as the greatest barrier, cited by 33% of respondents. This concern isn’t abstract: researchers handle sensitive customer data, proprietary business information, and personally identifiable information subject to regulations like GDPR and CCPA.

Other significant barriers include:

  • Time to experiment and learn new tools (32%)
  • Training (32%)
  • Lack of transparency in AI use (31%)
  • Integration challenges (28%)
  • Internal policy restrictions (25%)
  • Cost (24%)

The transparency issue is particularly thorny. When an AI system produces an analysis or insight, researchers often cannot trace how the system arrived at its conclusion—a problem that conflicts with the scientific method’s emphasis on replicability and clear methodology.

The Emerging Workflow Model

Despite these challenges, researchers aren’t abandoning AI—they’re developing frameworks to use it responsibly. The consensus model is “human-led research supported by AI,” where AI handles repetitive tasks like coding, data cleaning, and report generation whilst humans focus on interpretation, strategy, and business impact.

Nearly three in ten researchers (29%) describe their current workflow as “human-led with significant AI support,” whilst 31% characterise it as “mostly human with some AI help.”

Looking to 2030

Looking ahead to 2030, 61% envision AI as a “decision-support partner” with expanded capabilities including:

  • Generative features for drafting surveys and reports (56%)
  • AI-driven synthetic data generation (53%)
  • Automation of core processes like project setup and coding (48%)
  • Predictive analytics (44%)
  • Deeper cognitive insights (43%)

The report describes an emerging division of labour where researchers become “Insight Advocates”—professionals who validate AI outputs, connect findings to stakeholder challenges, and translate machine-generated analysis into strategic narratives that drive business decisions.

“AI can surface missed insights—but it still needs a human to judge what really matters,” Topiol said.

The Strange Persistence of Trust Issues

The survey’s most striking finding may be the persistence of trust issues despite widespread adoption. In most technology adoption curves, trust builds as users gain experience and tools mature. But with AI, researchers appear to be using tools intensively whilst simultaneously questioning their reliability—a dynamic driven by the technology’s pattern of performing well most of the time but failing unpredictably.

This creates a verification burden that has no obvious endpoint. Unlike traditional software bugs that can be identified and fixed, AI systems’ probabilistic nature means they may produce different outputs for the same inputs, making it difficult to develop reliable quality assurance processes.

Broader Implications

The market research industry’s AI adoption may presage similar patterns in other knowledge work professions. The experience of researchers—early AI adopters who have integrated the technology into daily workflows—offers lessons about both opportunities and pitfalls:

First, speed genuinely matters. AI’s velocity lets researchers respond to business questions within hours rather than weeks, making insights actionable whilst decisions are still being made.

Second, productivity gains are real but uneven. Time saved can disappear into validating AI outputs or correcting errors, and the net benefit depends on the specific task, the quality of the tool, and the user’s skill in prompting and reviewing.

Third, required skills are changing. Future competencies include cultural fluency, strategic storytelling, ethical stewardship, and “inquisitive insight advocacy”—the ability to ask the right questions, validate AI outputs, and frame insights for maximum business impact.

The Fundamental Tension

Whether this transformation marks an elevation of the profession or a deskilling depends partly on how the technology evolves. If AI systems become more transparent and reliable, the verification burden may decrease. If they remain opaque and error-prone, researchers may find themselves trapped in an endless cycle of checking work produced by tools they cannot fully trust or explain.

Researchers are moving faster than ever, delivering insights in hours instead of weeks, and handling analytical tasks that would have been impossible without AI. But they’re doing so whilst shouldering a new responsibility: serving as the quality control layer between powerful but unpredictable machines and business leaders making million-pound decisions.

The industry has made its bet. Now comes the harder part: proving that human judgement can keep pace with machine speed—and that the insights produced by this uneasy partnership are worth the trust clients place in them.

