When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?
As users disclose violent intentions to AI systems, companies face a growing dilemma between privacy protections and public safety responsibilities.

The rapid rise of generative AI has introduced a troubling new question: What happens when people use chatbots not just for homework help or coding advice, but to plan acts of violence?
The issue resurfaced following a January 2025 explosion outside the Trump International Hotel in Las Vegas. Matthew Livelsberger, a soldier from Colorado, detonated explosive materials packed inside a Tesla Cybertruck after fatally shooting himself. Seven bystanders were injured.
In the aftermath, investigators discovered that Livelsberger had consulted OpenAI’s chatbot, ChatGPT, days before the attack. According to chat logs later provided to law enforcement, he asked about Tannerite — a legal explosive target compound — including how much he could buy, what caliber firearm could detonate it, and where to obtain supplies during his drive from Colorado to Nevada. He also asked about activating phones without personal identification.
Officials said it was the first confirmed instance of ChatGPT being used to help construct a bomb detonated on U.S. soil.
What Happened
Matthew Livelsberger consulted ChatGPT prior to detonating explosives in Las Vegas in January 2025.
OpenAI reviewed logs and later shared relevant information with law enforcement.
After the incident, OpenAI created an internal monitoring channel called “AutoInvestigator” to flag concerning activity.
In June 2025, ChatGPT flagged concerning activity from Canadian user Jesse Van Rootselaar but did not alert authorities.
In February 2026, Van Rootselaar killed eight people in British Columbia. OpenAI then contacted law enforcement.
Canadian authorities are now questioning why the company did not report earlier.
The Monitoring Challenge
In response to the Las Vegas attack, OpenAI reportedly built automated systems designed to detect “worrisome activity” among its roughly 800 million weekly users. The system generates alerts when conversations appear to move into dangerous territory.
But determining what constitutes an “imminent” threat is far from straightforward.
In the Canadian case, OpenAI concluded that although the user’s discussions involved gun violence, there was no credible or immediate plan to harm others. The account was banned for policy violations but not reported.
After the mass killing occurred months later, the company contacted authorities.
This sequence has raised questions about whether AI firms should adopt a stronger duty to report potential threats.
Analysis
The dilemma centers on a longstanding tension between user privacy and public safety — now amplified by the intimate nature of AI conversations.
Under U.S. privacy laws established during the early email era, technology companies generally cannot disclose user communications without a court order, except in extreme cases involving child exploitation or imminent threats of serious bodily harm.
However, AI chatbots differ from traditional platforms. They engage users in extended, humanlike dialogue. As Sam Altman has publicly noted, some users treat ChatGPT “as a therapist,” sharing deeply personal thoughts and struggles.
That analogy complicates matters.
Licensed therapists operate under a “duty to warn” principle, requiring them to alert authorities or potential victims if a patient presents a credible threat. But AI companies are not medical providers. They are technology platforms — at least legally.
Some legal scholars argue that if chatbots are functioning as surrogate confidants, companies may need clearer ethical standards. Others caution that over-reporting could chill free expression and flood law enforcement with false alarms.
False positives are a real concern. Users may be writing fiction, conducting academic research, or stress-testing AI systems. Mandating blanket reporting could turn AI firms into de facto government surveillance agents.
There is also a constitutional dimension. If companies are legally required to monitor and report conversations, courts may view them as government actors, potentially triggering Fourth Amendment concerns around unreasonable search.
From a practical standpoint, law enforcement agencies could be overwhelmed by vague or non-actionable reports. Without timely, specific intelligence, authorities may lack the ability to intervene effectively.
Yet critics argue that companies cannot ignore warning signs simply because decisions are difficult. As one former AI investigator noted, chat conversations often provide richer context than simple search queries — making it easier, not harder, to identify credible threats.
Corporate Incentives and Transparency
Another sensitive factor is corporate reputation. Reporting threats to authorities or disclosing them publicly can reveal how AI tools are being misused. Some critics suggest companies may hesitate to share information that exposes vulnerabilities in their own systems.
OpenAI has stated it is cautious about involving police prematurely, citing the potential harm of wrongful investigations. However, families of victims and civil liberties groups are increasingly demanding clarity on how AI companies balance safety and confidentiality.
The broader policy debate is still evolving. Some advocates propose that chatbot providers file suspicious activity reports similar to financial institutions. Others argue that such mandates would expand executive power and erode user trust.
The Road Ahead
The question of a chatbot “duty to warn” may soon move from academic debate to legislative action. Lawmakers are already examining AI governance frameworks, and violent misuse of AI tools could accelerate calls for regulation.
At its core, the issue reflects a deeper societal shift. AI systems are no longer passive tools. They are interactive partners capable of shaping user decisions in subtle ways.
When those interactions cross into violent planning, the stakes escalate.
Balancing privacy, civil liberties, public safety, and technological innovation will require careful policy design — not reactive measures driven by isolated tragedies.
As AI becomes more embedded in daily life, the question is no longer theoretical: When someone confesses violent intent to a machine, who is responsible for acting — and how soon?
That answer may define the next chapter of AI accountability.