Police Handcuff US Student After AI Mistakes Bag of Crisps for Gun
TL;DR: A 16-year-old student at Kenwood High School in Baltimore was handcuffed by armed police after an AI surveillance system misidentified an empty bag of Doritos as a weapon. Despite safety staff clearing the false positive, communication failures resulted in a traumatic police response that school officials defend as the system “working as designed.”
A Baltimore high school student experienced a distressing encounter with armed police after an artificial intelligence weapon detection system incorrectly flagged him as a security threat for possessing an empty bag of crisps.
The Incident
Taki Allen, a 16-year-old student at Kenwood High School, had finished a bag of Doritos and put the empty packet in his pocket while waiting to be collected after football practice. Within twenty minutes, armed police officers arrived with weapons drawn and handcuffed the teenager.
The school’s AI surveillance system, provided by Omnilert, had generated an alert identifying Allen’s snack packet as a potential firearm. This alert triggered a sequence of events that culminated in a police response despite subsequent human review clearing the false positive.
System Response and Communication Breakdown
The school’s principal confirmed that following the AI system’s initial alert, safety staff reviewed the situation and determined no weapon was present. However, this clearance failed to reach the school resource officer, who proceeded to contact police.
This communication failure—the gap between human operators clearing the false positive and law enforcement receiving that information—represents a critical flaw in the system’s operational workflow. The student experienced the full police response protocol despite the threat having been ruled out by school safety personnel.
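The flaw is procedural as much as technological: the escalation path evidently did not check whether human review had already cleared the alert. As an illustration only, the minimal Python sketch below shows an escalation step gated on review status. The class and function names are hypothetical; Omnilert's actual internal workflow has not been made public.

```python
from enum import Enum


class ReviewStatus(Enum):
    """Outcome of the human review that follows an AI-generated alert."""
    PENDING = "pending"
    CLEARED = "cleared"      # reviewer determined no weapon is present
    CONFIRMED = "confirmed"  # reviewer confirmed a genuine threat


class Alert:
    """A single weapon-detection alert awaiting, or past, human review."""
    def __init__(self, alert_id: str) -> None:
        self.alert_id = alert_id
        self.status = ReviewStatus.PENDING


def should_escalate_to_police(alert: Alert) -> bool:
    """Gate the police notification on the current review status.

    In the Kenwood incident, the clearance by safety staff reportedly
    never reached the school resource officer, so escalation proceeded
    as if the alert were still live.
    """
    return alert.status is not ReviewStatus.CLEARED


# Example: once safety staff clear the alert, a gated workflow stops here.
alert = Alert("alert-001")
alert.status = ReviewStatus.CLEARED
assert should_escalate_to_police(alert) is False
```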
Official Response and Justification
Superintendent Dr. Myriam Rogers defended the system’s performance, stating it “did what it was supposed to do, which was to signal an alert and for humans to take a look.”
Omnilert, the AI surveillance system provider, acknowledged the incident as a “false positive” whilst maintaining that the response followed the system’s designed protocols. This characterisation positions the traumatic police encounter as an acceptable outcome of the technology functioning as intended.
Impact on Student
Allen described significant psychological impact from the encounter. Speaking to reporters, he stated: “I don’t think no chip bag should be mistaken for a gun at all.”
The teenager now avoids outdoor activities after football practice, fearing for his personal safety. This behavioural change—a student altering daily routines due to fear of AI misidentification and subsequent police response—represents substantial harm beyond the immediate incident.
Systemic Concerns and Oversight
A local council member called for the school district to conduct a comprehensive review of its AI weapon detection procedures following the incident. However, Superintendent Rogers indicated no such evaluation would occur, effectively defending the current system despite this demonstrated failure.
This administrative response raises questions about accountability frameworks for AI surveillance systems in educational settings. When a false positive results in a traumatic police encounter with an innocent student, and officials characterise this as the system “working as designed,” it suggests insufficient consideration of the technology’s real-world consequences.
Looking Forward
The incident highlights several critical issues with AI surveillance deployment in schools:
Communication Protocols: A clearance issued by human reviewers is worthless if it never reaches the officers being dispatched. Systems designed to trigger armed police responses require robust, real-time communication channels so that a resolved false positive cannot continue to escalate.
Acceptable Error Rates: When false positives can result in armed police encounters with minors, what constitutes an acceptable error rate? Omnilert’s acknowledgment of a “false positive” treats the incident as a statistical inevitability rather than a serious operational failure with significant consequences for the student involved; the rough calculation after this list illustrates how quickly even small error rates add up at scale.
Psychological Harm: Allen’s altered behaviour—avoiding outdoor activities due to fear of misidentification—demonstrates that false positives in surveillance systems carry costs beyond immediate confrontation. Students subjected to traumatic police encounters may experience lasting effects on their sense of safety and trust in school environments.
Accountability and Review: The superintendent’s decision against comprehensive review following a false positive that resulted in police handcuffing a student raises questions about oversight mechanisms. If such incidents don’t trigger procedural evaluation, what threshold of harm would prompt systemic reassessment?
Technology Limitations: The system’s inability to distinguish between an empty crisp packet and a firearm suggests fundamental limitations in visual recognition technology applied to weapon detection. These limitations carry particularly high stakes when false positives trigger armed police responses.
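To make the error-rate point concrete, a back-of-the-envelope calculation is sketched below. All figures are assumptions chosen purely for illustration, not published performance data for Omnilert or Baltimore County schools; the point is only that a small per-event false positive rate multiplies across continuous monitoring of many cameras.

```python
# Illustrative assumptions only; not vendor-published performance figures.
cameras = 100                        # camera feeds monitored across a campus
detections_per_camera_per_day = 500  # objects analysed per camera each day
false_positive_rate = 1e-4           # 0.01% of analysed objects wrongly flagged

daily_detections = cameras * detections_per_camera_per_day
false_alerts_per_day = daily_detections * false_positive_rate
false_alerts_per_school_year = false_alerts_per_day * 180  # ~180 school days

print(f"Expected false alerts per day: {false_alerts_per_day:.1f}")                  # 5.0
print(f"Expected false alerts per school year: {false_alerts_per_school_year:.0f}")  # 900
# Under these assumptions, each of those 900 alerts is a potential armed
# police response if the human clearance step fails to propagate.
```

Whether a handful of false alerts a day is tolerable depends entirely on what each one can trigger, which is precisely the question this incident raises.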
This case exemplifies the risks of deploying AI surveillance systems without adequate safeguards, clear accountability frameworks, and robust communication protocols to prevent false positives from escalating into harmful real-world consequences—particularly when those consequences involve armed law enforcement and vulnerable populations like students.
Source Attribution:
- Source: Silicon UK
- Original: https://www.silicon.co.uk/e-innovation/artificial-intelligence/student-ai-surveillance-627179
- Published: 27 October 2025
- Author: Matthew Broersma