
AI Puts Man In Hospital

 
(@declan-walker)
Noble Member

A 60-year-old man was hospitalized for three weeks after following dietary advice from ChatGPT that led him to replace table salt (sodium chloride) with sodium bromide.

Medical professionals from the University of Washington published a case report detailing how the man, who had no prior mental health issues, arrived at the hospital convinced that his neighbor was trying to poison him. He was reportedly distilling his own water and deeply mistrusted anything he was offered to drink. Within 24 hours of admission, he became increasingly paranoid and began experiencing both auditory and visual hallucinations, eventually being placed under an involuntary psychiatric hold for grave disability.

After being stabilized, he revealed that he had embarked on a "personal experiment" to eliminate table salt from his diet, motivated by concerns about its health effects and prompted, via ChatGPT, to substitute it with sodium bromide. He had been following this regimen for around three months before being admitted.

Sodium bromide is not a safe dietary substitute. Bromide toxicity, known as "bromism," was a common cause of psychiatric admissions in the early 20th century. Its symptoms include neurological and psychiatric disturbances (paranoia, hallucinations, confusion, ataxia), dermatological signs (such as acne or cherry angiomas), severe thirst, fatigue, and insomnia.

Doctors treated him with intravenous fluids, electrolyte rebalancing, and antipsychotic medications. His condition improved gradually over three weeks, and he was discharged once stable.

The report's authors ran a similar query on ChatGPT themselves, using the same early model, and found that it also suggested bromide as a substitute for chloride, but without any explicit health warning or the inquiry into the user's intent that a medical professional would make.

They emphasized that this case illustrates the potential dangers of using AI tools for health advice without professional guidance. ChatGPT, though informative, can offer misleading or decontextualized suggestions. Even though OpenAI includes disclaimers that the chatbot is not intended for diagnosing or treating health conditions and encourages users to seek professional help, many people still rely on AI for health-related decisions.

This incident serves as a powerful reminder: AI can offer suggestions that appear plausible but may be dangerous if taken as health guidance without human oversight.

 

Source: NBC NEWS


Topic starter Posted : 18/08/2025 11:10 am