
When Chatbots Are Used to Plan Violence, Is There a Duty to Warn?
10 hours ago · People are revealing sensitive personal information to A.I. chatbots, including plans to commit violent acts.
When should AI chatbots call the cops? - POLITICO
22 hours ago · The law has required therapists to contact authorities about potentially violent patients since the 1970s, and there have been various proposals to hold chatbot operators to similar standards.
When AI Chatbots Encourage Violence - Psychology Today
Nov 6, 2025 · In this post, I’ll examine cases in which AI chatbots have encouraged violence toward others, in one case culminating in a murder-suicide.
Chatbots could be harmful for teens' mental health and social ...
Dec 29, 2025 · Teen use of AI chatbots is growing, and psychologists worry it is affecting teens' social development and mental health. Here's what parents should know to help kids use the technology …
AI Chatbots and Criminal Intent: Who Bears Responsibility ...
20 hours ago · The challenge ahead isn’t choosing between privacy and safety, but designing systems intelligent enough to honor both values. Source: New York Times – When Chatbots Are Used to Plan …
The dark side of AI chatbots: Lies, violent suggestions
Jul 28, 2025 · (NewsNation) — AI chatbots are becoming a part of everyday life, but concern over the tech is mounting. A new study said AI’s answers to medical questions were incorrect 88% of the time.