OpenAI’s risky step to protect teens
ChatGPT accounts will inform parents when their children express thoughts of self-harm. Will the move protect kids — or simply drive them elsewhere?
It’s the first big developer to say AI companions aren’t safe for under-18s. Will others follow?
PLUS: ChatGPT users’ surprising questions for Sam Altman + Grokipedia launches
Millions of people are sending messages to ChatGPT each week suggesting emotional dependence or plans for self-harm, the company says. Will an updated model protect them?
Hitting close to Chrome