"It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming," the lawsuit states.

The suit argues that the concerning interactions experienced by the plaintiffs' children were not "hallucinations," a term researchers use to refer to an AI chatbot's tendency to make things up. "This was ongoing manipulation and abuse, active isolation and encouragement designed to and that did incite anger and violence."

According to the suit, the 17-year-old engaged in self-harm after being encouraged to do so by the bot, which the suit says "convinced him that his family did not love him."
Source: "Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits," NPR