TL;DR
A Deloitte health care report commissioned by the government of Newfoundland and Labrador for nearly $1.6 million has been found to contain apparently AI-generated errors, including fabricated academic citations and fictional co-author attributions. The findings follow similar problems in an Australian government report earlier this year.
Fabricated Research Uncovered
An investigation by The Independent, a Canadian news outlet, discovered multiple issues within a 526-page report advising Newfoundland and Labrador’s government on health care topics including virtual care, retention incentives, and pandemic impacts on health workers.
The report cited made-up academic papers, attributed real researchers to papers they had never worked on, and listed fictional papers whose supposed co-authors said they had never collaborated. One citation referenced a paper from the Canadian Journal of Respiratory Therapy that cannot be found in the journal's database.
“It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work,” observed Gail Tomblin Murphy, an adjunct professor at Dalhousie University, who was cited as an author of a non-existent paper.
Deloitte’s Response
Deloitte Canada maintains that AI was not used to write the report, stating that AI was “selectively used to support a small number of research citations.” The firm says it stands behind the report’s recommendations and is making “a small number of citation corrections” that do not affect its findings.
The report remains on the provincial government’s website as of publication.
A Pattern Emerges
The Canadian findings follow revelations last month that Deloitte used AI in a $290,000 Australian government report on welfare enforcement. That 237-page study included references to non-existent academic papers and a fabricated quote from a federal court judgment.
In the revised Australian report, Deloitte acknowledged using Azure OpenAI in its creation, and the firm’s Australian member firm agreed to a partial refund.
Looking Forward
These incidents highlight significant risks when AI tools are used for research and citation without rigorous verification. For organisations commissioning professional reports, the findings underscore the importance of explicitly addressing AI use in engagement terms and implementing robust fact-checking protocols, particularly for work informing policy decisions.
Source: Fortune