When AI Mistakes a Chip Bag for a Gun: A High School Incident That Raises Important Questions
In an alarming case of artificial intelligence gone wrong, 16-year-old Taki Allen found himself surrounded by police officers with guns drawn at Kenwood High School in Essex, Maryland. The teenager had simply placed an empty bag of chips in his pocket while waiting for his ride home after school. What should have been an ordinary Monday afternoon turned into a traumatic experience when the school’s AI security system misidentified the crumpled snack bag as a firearm, triggering an immediate and overwhelming law enforcement response.
“Police showed up, like eight cop cars, and then they all came out with guns pointed at me talking about getting on the ground. I was putting my hands up like, ‘What’s going on?’” Allen recounted to local news station WMAR-2. The teenager, who was handcuffed during the incident, was left wondering how something so innocent could be mistaken for something so dangerous.
Body camera footage later released by Baltimore Police captured the moment officers discovered the error. After tracing the supposed “weapon” to a nearby trash can, they found only the discarded chip bag that had triggered the alert. One officer can be heard explaining to the stunned students, “I guess just the way you guys were eating chips… It picked it up as a gun. AI’s not the best.” This candid acknowledgment highlighted a growing concern about the reliability of AI surveillance systems being deployed in educational settings. The incident has unsettled students, parents, school administrators, and city officials, all of whom are questioning who bears responsibility for the frightening misidentification and what safeguards should be in place to prevent similar occurrences.
The school district and the AI system provider, Omnilert, have offered different perspectives on the incident. Superintendent Dr. Myriam Rogers stated that the alert had actually been canceled by the Baltimore County Public Schools Safety Team, but the school principal, unaware of this cancellation, had already contacted their School Resource Officer, setting the police response in motion. Rogers defended the system, saying, “The program is based on human verification and in this case the program did what it was supposed to do, which was to signal an alert and for humans to take a look to find out if there was cause for concern in that moment.” Omnilert similarly maintained that their system operated as designed, with the AI identifying a possible threat that was then elevated for human review, relying on authorized safety personnel for final determination. The company emphasized that once the object was confirmed not to be a firearm and the alert was marked as resolved, they “had no further involvement in any subsequent actions or decisions related to this event.”
The psychological impact on young Taki Allen, however, cannot be overstated. The teenager said the incident has fundamentally changed how he feels about his safety at school. “I don’t think no chip bag should be mistaken for a gun at all,” he told reporters. Even more concerning is how the experience has altered his behavior: “I don’t think I’m safe enough to go outside, especially eating a bag of chips or drinking something. I just stay inside until my ride comes.” These words paint a picture of a student whose sense of security and trust has been severely damaged by technology meant to protect him. The fact that a normal teenage behavior—eating snacks after school—could trigger such a frightening response reveals the potential for AI systems to create new anxieties rather than alleviate existing ones.
The incident at Kenwood High School did not occur in isolation but within a broader national context in which schools are increasingly turning to technological solutions to address safety concerns. In the wake of tragic school shootings, districts across the country have invested in various security measures, including AI surveillance systems designed to detect weapons. While the intention behind these deployments is understandable—protecting students and staff from potential violence—the Kenwood incident raises critical questions about their effectiveness, reliability, and the potential for false positives that could traumatize innocent students. It also highlights the fine line between creating secure learning environments and establishing systems that might generate fear or infringe on students’ sense of normalcy and privacy.
The case of Taki Allen serves as an important cautionary tale about the integration of artificial intelligence in security contexts, particularly in schools where the wellbeing of young people must be the paramount concern. It underscores the need for robust human oversight, clear protocols for verification before escalation, and proper training for all personnel involved in responding to AI-generated alerts. As schools continue to navigate the complex landscape of safety measures in the digital age, incidents like this one remind us that technology is not infallible and that the human impact of these systems must always be considered. For Taki Allen and students like him, feeling safe at school isn’t just about being protected from external threats—it’s also about not being mistakenly treated as one. Moving forward, finding this balance will be essential for creating truly secure and supportive educational environments where students can focus on learning rather than fearing that an innocent action might trigger an overwhelming response.