TL;DR: The UK Department for Education has suspended AI-driven attendance reports just days after launch, following complaints about data inaccuracies and unreliable school comparisons. The attendance baseline improvement expectation (ABIE) reports, which used AI to generate targets and identify similar high-performing schools, contained incorrect attendance figures and inappropriate benchmark schools.

Rapid Suspension Following Accuracy Complaints

Last Wednesday, ministers announced every school would receive ABIE reports containing attendance targets and names of similar high-performing schools, both generated using AI. The initiative aimed to help bring attendance “back to – and beyond – pre-pandemic levels,” according to Education Secretary Bridget Phillipson.

By Friday, headteachers and trust CEOs reported discrepancies in the data. Nigel Attwood, a Birmingham headteacher, found incorrect attendance figures in his report. “It’s embarrassing [for government]. We’ve got all the information under the sun for attendance [already], why do we need something else?” Attwood stated.

The Confederation of School Trusts informed leaders on Friday that reports had “been temporarily suspended,” noting some leaders cited apparent inaccuracies, though the DfE did not confirm this as the reason.

Flawed School Comparison Algorithms

Beyond data accuracy issues, school leaders questioned the reliability of AI-generated school comparisons. One trust executive noted their academy in an area with grammar schools was benchmarked against schools in local authorities without selective education—a fundamental contextual mismatch affecting comparison validity.

A deputy head of a standalone academy in the north east received four “similar schools” based on metrics and geography, all located over five hours away and part of large academy trusts. One trust included schools in London, raising questions about the AI’s understanding of meaningful similarity factors for attendance benchmarking.
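The failure mode leaders describe — plausible numeric matches that ignore admissions context — is easy to reproduce with a naive nearest-neighbour match. The sketch below is purely illustrative: the school names, figures and feature set are invented, and nothing is known about the DfE's actual method. It shows how matching on size and disadvantage alone can pair a school in a selective-education area with one in a non-selective area, while adding the context as a filter changes the result.

```python
# Hypothetical sketch of metric-only "similar schools" matching.
# All names, figures and features are invented for illustration.
from math import dist

schools = [
    # (name, pupils, disadvantage_pct, attendance_pct, selective_area)
    ("School A", 950, 30.0, 92.1, True),   # academy in a grammar-school area
    ("School B", 940, 29.5, 95.3, False),  # closest numbers, but no selection
    ("School C", 900, 31.0, 94.8, True),   # same admissions context, less similar numbers
]

def nearest_ignoring_context(target, pool):
    """Match purely on numeric metrics (pupil numbers, disadvantage)."""
    return min(pool, key=lambda s: dist(target[1:3], s[1:3]))

def nearest_with_context(target, pool):
    """Match on the same metrics, but only within the same admissions context."""
    same_context = [s for s in pool if s[4] == target[4]]
    return min(same_context, key=lambda s: dist(target[1:3], s[1:3]))

target, pool = schools[0], schools[1:]
print(nearest_ignoring_context(target, pool)[0])  # School B - contextual mismatch
print(nearest_with_context(target, pool)[0])      # School C - same admissions context
```

The metric-only match picks the school whose numbers are closest, exactly the kind of benchmark leaders called inappropriate; a single contextual filter changes the answer, which is why omitted context variables matter so much in automated comparisons.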

Implementation Timeline and Accountability Status

The government positioned this year’s targets as “indicative,” with “official” targets scheduled for delivery in September. Whilst the targets are not classified as accountability measures and will not be used for intervention, schools failing to meet expectations “may receive an offer of additional support to improve.”

Officials encouraged leaders to contact identified similar schools with higher attendance, seeking insights on positive attendance culture, data tracking, communication strategies and family engagement approaches to remove attendance barriers.

DfE Response and Recovery Timeline

The DfE confirmed on Friday that “reports are currently down” and said it expected them to be available again this week, but declined to specify what went wrong with the system. The rapid suspension—within days of launch—suggests significant technical or methodological issues requiring resolution before re-release.

The incident highlights the challenges of deploying AI systems for policy implementation where contextual factors strongly influence which comparisons and targets are appropriate. Education Secretary Phillipson’s stated aim to drive best-practice attendance approaches “everywhere” through AI-generated targeting appears compromised by the technology’s failure to capture the circumstances of individual schools.

The suspension raises broader questions about AI readiness for high-stakes education policy tools, particularly where automated systems generate targets and comparisons influencing school improvement strategies and potential intervention decisions. Whether revised reports will incorporate improved validation mechanisms or human oversight remains unclear.


Source: Schools Week
