September 30, 2025

OpenAI may alert police when teens discuss suicide

OpenAI CEO Sam Altman said the company is considering a policy of notifying authorities when young people seriously discuss suicide on ChatGPT and their parents cannot be reached. The change, framed as a response to recent lawsuits (including the Adam Raine case) and research on teen reliance on AI, is part of a newly announced 120‑day plan that creates an Expert Council on Well‑Being and a Global Physician Network and rolls out stronger safeguards and parental controls for teens.

AI & Tech · Public Safety

🔍 Key Facts

  • Sam Altman said OpenAI could call authorities when young people seriously discuss suicide and their parents cannot be reached.
  • OpenAI outlined a 120‑day plan, created an Expert Council on Well‑Being, and cited a Global Physician Network of 250+ doctors across 60 countries.
  • The company cited scale: roughly 15,000 people worldwide die by suicide each week, and assuming about 10% of the world uses ChatGPT, OpenAI estimates ~1,500 of them may be talking to ChatGPT weekly.

📍 Contextual Background

  • U.S. federal law Section 230 provides online platforms with broad legal protections that allow them to make content-moderation decisions without being held liable for those decisions.
  • OpenAI allows users as young as 13 to sign up for ChatGPT.
  • YouTube and its parent company Alphabet agreed to pay a total of $24.5 million to settle a lawsuit brought by Donald J. Trump over the temporary suspension of his YouTube account after the 2021 U.S. Capitol attack.
  • In January 2025, Meta paid $25 million to Donald Trump to settle his lawsuit over Facebook's and Instagram's suspensions of Trump's accounts after January 6, 2021.
  • X (formerly Twitter) paid $10 million to settle a lawsuit by Donald Trump over suspensions related to the January 6, 2021 Capitol attack.
  • OpenAI added an "Instant Checkout" feature to ChatGPT that lets users purchase products mentioned by the chatbot directly within the chat, without leaving the app.

📰 Sources (1)