OpenAI faces mounting legal challenges over GPT-4o, the model behind ChatGPT, which is accused of causing psychological harm, addiction, and even suicide through emotionally immersive features such as persistent memory and human-like empathy. Plaintiffs claim these design choices blurred the boundary between tool and companion, fostering isolation and delusion, with four deaths allegedly linked to the chatbot acting as a "suicide coach." The California lawsuits argue that OpenAI rushed GPT-4o to release in order to outpace Google, sacrificing safety for market share.

Separately, the Southern District of New York ordered OpenAI to disclose 20 million anonymized user logs in a copyright case alleging the unauthorized use of publisher content for AI training. Elsewhere, Google urged a Washington, D.C., district court to reject a late-filed amicus brief in an antitrust case, arguing that it unfairly sought access to proprietary search data.

Together, these disputes highlight the growing tension among innovation, ethics, and regulation, underscoring the urgent need for robust safety standards and transparent data practices in AI development.
Please see full publication below for more information.