
When AI Goes Wrong: ChatGPT's Health Advice Puts Man in Hospital!

Giovanni Rossi: "Wow, this is a wake-up call! AI isn't always right."
Hikari Tanaka: "How could he trust AI over a doctor's advice? Just wild!"
John McGregor: "I hope he learned his lesson! Always consult a professional."
Thelma Brown: "Imagine cooking with bromide! That's a recipe for disaster!"
Nguyen Minh: "This story is insane! AI can really mess you up if you're not careful."
Ivan Petrov: "I guess ChatGPT isn’t a nutritionist after all. 😂"
Darnell Thompson: "Is it just me, or are we trusting AI too much?"
Samuel Okafor: "I wonder how many others have made similar mistakes. 🤔"
Mei Lin: "Health advice from AI? Sounds like a bad sitcom plot."
Darnell Thompson: "What’s next? Asking ChatGPT for surgery tips?"
Samuel Okafor: "Bromide? That's a throwback! Glad he's okay now."

Published: 2025-08-09T04:33:00Z


Did you know that following AI advice can land you in the hospital? A 60-year-old man from New York learned this lesson the hard way after turning to ChatGPT for health guidance. What he thought was a simple salt-reduction plan spiraled into a medical nightmare, underscoring the dangers of trusting AI without human oversight.

The man had asked ChatGPT how to eliminate sodium chloride, commonly known as table salt, from his diet. Instead of sound dietary advice, the AI recommended sodium bromide, an outdated compound that can be toxic in large doses. Believing he was making a healthy choice, the man replaced table salt with this dangerous substitute for three months, leading to a medical emergency.

Doctors reported that he developed dangerously low sodium levels, a condition known as hyponatraemia, which can cause confusion, seizures and other serious complications. His family revealed that he had relied on the AI-generated health plan without consulting a doctor, cutting sodium from his diet to dangerous levels.

When he was finally hospitalized, he exhibited alarming symptoms, including hallucinations, paranoia, and extreme thirst. In a state of confusion, he even refused water, fearing it was contaminated. The doctors diagnosed him with bromide toxicity, a condition that was once common in the early 20th century but is now rare. Alongside the psychological effects, he also developed skin eruptions and distinctive red spots, known as cherry angiomas, all signs of bromism.

After spending three weeks in the hospital, where medical staff focused on rehydration and restoring his electrolyte balance, the man eventually recovered. This alarming case, recently published in Annals of Internal Medicine: Clinical Cases, a journal of the American College of Physicians, highlights the urgent need for critical thinking when interpreting AI-generated advice, particularly on matters of health.

The authors of the case report warned of the growing risk of health misinformation generated by AI systems. They emphasized that while AI tools like ChatGPT can provide general information, they should never take the place of professional medical consultation. OpenAI has clearly stated in its Terms of Use that users should not rely on its outputs as a sole source of truth or as a substitute for professional medical advice.

As AI adoption continues to grow, so does users' responsibility to question and verify what these systems tell them. In these times of rapid technological advancement, the ultimate authority on health should always remain with qualified professionals.

Malik Johnson

Source of the news: The Times of India
