TL;DR: Large language models have made recruitment demonstrably less meritocratic and less efficient: research shows that AI-generated cover letters and job postings destroy the signalling mechanisms that previously identified committed, capable candidates and serious employers.
Two recent studies reveal an efficiency paradox: AI makes writing applications faster for candidates and writing job postings faster for employers, yet makes the recruitment system as a whole markedly less efficient whilst reducing meritocracy.
The Cover Letter Signal Collapse
Research from Dartmouth’s Anaïs Galdin and Princeton’s Jesse Silbert examined how generative AI affected hiring on online freelancing platforms. Initially, the results appeared promising: AI-generated cover letters were faster to produce, higher quality, and more tailored to specific jobs than pre-LLM applications.
However, every subsequent outcome was negative. Before LLMs, good cover letters signalled both candidate quality and commitment—people who wrote better applications were much more likely to get hired. After LLMs, hiring rates were identical for good and poor applications, and overall hiring fell.
Employers no longer trusted quality applications to represent quality applicants, questioning every pitch they read. They were right to be distrustful: evaluations of work completed by post-LLM hires showed no correlation between cover letter quality and actual performance.
The use of LLMs made the market measurably less meritocratic, with a 19% fall in hiring for the most capable workers and a 14% rise for the least capable. Simultaneously, application numbers and cover letter lengths roughly doubled, forcing employers to sift through more content that was less useful and often actively misleading.
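The mechanism is the classic signalling collapse: once a polished letter costs everyone almost nothing to produce, letter quality stops separating strong candidates from weak ones. The toy simulation below is not from the paper and uses purely illustrative parameters; it shows how screening on letter quality rewards ability when good letters are costly to write, and becomes a lottery once everyone submits a near-identical polished letter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(0.0, 1.0, n)  # latent worker quality

# Pre-LLM: letter quality tracks ability, plus effort noise.
pre_letters = ability + rng.normal(0.0, 0.5, n)
# Post-LLM: everyone submits a polished, near-identical letter,
# so letter quality is essentially unrelated to ability (illustrative).
post_letters = rng.normal(1.5, 0.1, n)

def hire_rates(letters, ability, top_share=0.2):
    """Hire the top `top_share` of letters; return hiring rates
    for the top and bottom ability quartiles."""
    hired = letters >= np.quantile(letters, 1 - top_share)
    top = ability >= np.quantile(ability, 0.75)
    bottom = ability <= np.quantile(ability, 0.25)
    return hired[top].mean(), hired[bottom].mean()

for label, letters in [("pre-LLM", pre_letters), ("post-LLM", post_letters)]:
    corr = np.corrcoef(letters, ability)[0, 1]
    hi, lo = hire_rates(letters, ability)
    print(f"{label:8s}  corr(letter, ability) = {corr:+.2f}  "
          f"hired: top quartile {hi:.0%}, bottom quartile {lo:.0%}")
```

Under these assumptions, the employer's screen goes from hiring almost exclusively out of the top ability quartile to hiring every quartile at the base rate: the same qualitative pattern as the measured 19% fall for the most capable workers and 14% rise for the least capable.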
The Job Posting Signal Collapse
A parallel study examined what happened when employers used AI to write job postings. Again, LLMs helped employers write more postings faster, and AI-generated postings attracted more applications than manually written ones.
Yet despite more postings and more applicants, hiring outcomes were unchanged. Many of the extra AI-induced postings came from employers unsure whether they actually wanted the role filled: AI made it cheap and easy to post half-hearted adverts for jobs that never really existed.
In a mirror image of the cover letter study, AI removed the commitment signal an employer sends when posting a job. The correlation between time spent writing a job advert and applicant numbers flipped from positive to negative.
Whilst LLMs saved employers time writing postings, jobseekers wasted substantially more time and effort on pointless applications to phantom positions. This aligns with UK reporting identifying “ghost” job adverts as a significant factor in graduate unemployment.
Employer Responses and Trade-Offs
Employers report drowning in applications: widespread AI use is undermining their ability to identify the best candidates across every stage of online assessment, not just CVs and application forms but asynchronous video interviews, technical proficiency assessments, and psychometric tests.
Proposed responses include: platform features that flag "high-intent" candidates (for example, letting applicants tag one position per month); greater reliance on online assessments that are harder to outsource to AI; technical fixes such as monitoring mouse movements or eye-tracking; moving face-to-face or telephone interviews earlier in the process; heavier emphasis on employee referrals; narrower job advertising (posting only to elite university job boards); and a return to in-person assessment centres with exam-condition testing.
Many of these responses conflict with the goals of widening access, reducing human bias, and moving beyond school or university pedigree as a quality signal. Trade-offs exist everywhere.
The research demonstrates that technology can make individual tasks more efficient for candidates (applying for jobs) and employers (writing adverts) whilst making the system as a whole less efficient. Tasks and systems are not the same.
Source: Financial Times