TL;DR
University of Reading research demonstrates that five minutes of training significantly improves people’s ability to identify AI-generated faces. Testing 664 participants on StyleGAN3-created faces, the study found untrained typical observers achieved 31% accuracy (below chance level of 50%), whilst super-recognisers scored 41%. After brief training highlighting common rendering errors—unusual hair patterns, incorrect tooth counts—typical observers reached 51% accuracy and super-recognisers achieved 64%. The research, published in Royal Society Open Science, addresses security risks from computer-generated faces used in fake social media profiles and identity verification bypass.
Opening
As AI-generated faces become increasingly realistic—often judged more realistic than actual human faces—researchers have demonstrated that brief, targeted training can substantially improve detection capabilities. The University of Reading-led study provides quantitative evidence that human observers can learn to identify synthetic faces through recognition of specific rendering artifacts.
Context: Training Impact and Detection Challenges
The multi-university study (Reading, Greenwich, Leeds, Lincoln) tested participants against StyleGAN3-generated faces, the most advanced synthesis technology available when the study began. This made the task harder than in earlier research: baseline performance was poorer than in previous studies that used less sophisticated generation software.
Without training, both typical observers and super-recognisers performed worse than random guessing (50% chance level). The training procedure focused on computer rendering mistakes: unusual hair patterns, incorrect numbers of teeth, and other artifacts characteristic of StyleGAN3 output. This brief intervention improved typical observer accuracy by 20 percentage points (31% to 51%) and super-recogniser accuracy by 23 percentage points (41% to 64%).
Crucially, training produced similar percentage-point gains in both groups, suggesting that super-recognisers' advantage does not come simply from being better at spotting rendering errors; they may rely on different visual cues than typical observers when identifying synthetic faces. This finding has implications for how different detection approaches might complement each other.
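For readers who want to sanity-check figures like these, the short sketch below works through the arithmetic: it computes each group's percentage-point gain and a rough normal-approximation z-score for how far an observed accuracy sits from the 50% chance level. The trials-per-participant figure is an assumption for illustration only (the study's exact task length is not given here), and the calculation is not the paper's own statistical analysis.

```python
# Illustrative sketch: percentage-point gains and deviation from chance.
# TRIALS is an assumed number of judgements per condition, not a figure
# taken from the study; the z-score uses a simple normal approximation.
from math import sqrt

CHANCE = 0.50
TRIALS = 100  # assumption for illustration

def gain_pp(before: float, after: float) -> float:
    """Improvement expressed in percentage points."""
    return (after - before) * 100

def z_vs_chance(accuracy: float, n: int = TRIALS, p: float = CHANCE) -> float:
    """Normal-approximation z-score for an observed accuracy vs. chance."""
    se = sqrt(p * (1 - p) / n)
    return (accuracy - p) / se

groups = {
    "typical observers": (0.31, 0.51),
    "super-recognisers": (0.41, 0.64),
}

for name, (before, after) in groups.items():
    print(f"{name}: +{gain_pp(before, after):.0f} pp "
          f"(z before = {z_vs_chance(before):.1f}, "
          f"z after = {z_vs_chance(after):.1f})")
```

Under these assumed trial counts, the untrained accuracies sit well below chance while the post-training figures sit at or above it, which is the pattern the study reports.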
Dr Katie Gray, lead researcher, emphasised practical security concerns: “Computer-generated faces pose genuine security risks. They have been used to create fake social media profiles, bypass identity verification systems and create false documents.” The training procedure’s brevity and ease of implementation make it viable for real-world deployment.
Looking Forward
Future research will examine whether training effects persist over time and how super-recognisers’ skills might complement AI detection tools. The combination of human detection (particularly super-recognisers with training) and automated systems could provide layered verification for identity-critical applications.
The study’s use of StyleGAN3, advanced but not the absolute latest generation, highlights an ongoing challenge: as synthesis technology improves, detection training must be updated continuously to address new rendering patterns. The research suggests that brief, focused training on current artifacts remains effective, but maintaining detection capability requires understanding how rendering errors evolve across AI generations.
For organisations managing identity verification, the findings indicate that training customer service staff or verification personnel to spot common rendering errors could provide cost-effective supplemental security, particularly when combined with algorithmic detection systems.
Source: University of Reading