When AI Health Advice Turns Dangerous
What started as a simple dietary tweak for a 60-year-old man trying to cut back on table salt ended in a three-week hospital stay, hallucinations, and a diagnosis of bromism — a medical condition so rare today that it’s mostly found in Victorian-era medical literature.
According to an August 5, 2025 case report in the Annals of Internal Medicine, the man asked ChatGPT how he might replace sodium chloride in his diet. The chatbot reportedly suggested sodium bromide, a chemical used in swimming pool maintenance and other industrial settings rather than as a food seasoning.
From Kitchen Swap to Psychiatric Hold
With no prior psychiatric or major medical history, the man followed the AI's suggestion for three months, buying sodium bromide online. Having read older studies linking table salt (sodium chloride) to health problems, he set out to eliminate the chloride component from his diet entirely.
By the time he reached the emergency department, he was convinced his neighbour was poisoning him. Blood tests revealed electrolyte abnormalities, including apparent hyperchloremia and a negative anion gap, a pattern that can appear when laboratory analysers misread bromide as chloride, which led doctors to suspect bromism.
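For readers unfamiliar with that lab clue, a minimal worked sketch of the anion gap is below; the numbers are illustrative only, not the patient's actual results.

```latex
% Anion gap: serum sodium minus the sum of chloride and bicarbonate (all in mmol/L)
\[
  \mathrm{AG} = [\mathrm{Na^+}] - \bigl([\mathrm{Cl^-}] + [\mathrm{HCO_3^-}]\bigr)
\]
% Illustrative values, not the patient's labs:
% typical chemistry:                        140 - (104 + 24) = 12
% bromide counted as chloride by the assay: 140 - (150 + 24) = -34  (negative gap)
```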
Within 24 hours of admission, his paranoia intensified and severe hallucinations set in, prompting an involuntary psychiatric hold. Other symptoms included fatigue, insomnia, facial acne, ataxia, and excessive thirst — all classic signs of bromide toxicity.
Bromism: A Relic of 19th-Century Medicine
Bromism was common in the late 1800s and early 1900s, when bromide salts were prescribed for headaches, seizures, and anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. FDA phased bromide out of over-the-counter medicines between 1975 and 1989, making modern cases extremely rare.
Bromide accumulates in the body over time, causing neurological, psychiatric, and skin-related symptoms. In this case, the patient's bromide level was a staggering 1,700 mg/L, more than 200 times the upper limit of the normal reference range.
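A quick arithmetic check of that multiple, using only the figures quoted in this article (the exact laboratory reference range is given in the case report itself), is sketched below.

```latex
% Ceiling implied by the quoted "more than 200 times" multiple:
\[
  \frac{1700\ \mathrm{mg/L}}{200} = 8.5\ \mathrm{mg/L}
\]
% i.e. the upper limit of the normal serum bromide range sits in the single-digit mg/L range
```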
The AI Factor and Safety Concerns
The case report notes that when the authors put a similar chloride-replacement question to ChatGPT 3.5, it also suggested bromide, without a specific warning about its toxicity and without asking why the information was wanted. Medical professionals stress that such advice should always be contextualised and accompanied by safety disclaimers — something AI still struggles to deliver consistently.
The authors warned:
“AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
Recovery and Lessons Learned
The patient was treated with aggressive intravenous hydration and electrolyte correction. Over time, his mental state returned to normal, and he was discharged without antipsychotic medication. At a follow-up two weeks after discharge, he remained stable.
The incident underscores a critical takeaway: while AI can be a powerful tool for health education, it is not a substitute for qualified medical advice.
AI Health Advice: Not Always Wrong, But Risky
Earlier in 2025, a 27-year-old woman in Paris credited ChatGPT with flagging possible blood cancer after she described symptoms such as night sweats and persistent itching. She was later diagnosed with Hodgkin lymphoma and began treatment, but stressed the importance of following up AI-generated insights with professional medical evaluation.
OpenAI’s New Safety Measures for ChatGPT
In response to growing safety concerns, OpenAI announced on August 4, 2025, that it is tightening mental health guardrails for ChatGPT. The chatbot will now:
- Encourage users to take breaks.
- Avoid giving advice on high-stakes personal decisions.
- Provide evidence-based resources instead of emotional counselling.
This follows scrutiny over instances in which earlier models gave overly agreeable responses instead of recognising emotional distress or dangerous behaviour. Research also suggests that chatbots struggle with emotional nuance and crisis management.
Final Word
This case is a sobering reminder that while AI offers vast information access, blindly following its advice can have dangerous consequences. For health-related changes, consultation with a qualified professional is not optional — it’s essential.