Business Report

How ChatGPT's medical advice nearly cost a man his life

By Sarene Kloren

Can AI chatbots really be trusted with our health?


The digital revolution has brought with it an unprecedented ease of access to information.

However, as our reliance on artificial intelligence (AI) grows, a new, potentially lethal threat has emerged: the dangerously persuasive power of AI chatbots when it comes to our health. 

While a quick search for symptoms online has long been a source of anxiety, a recent, alarming case highlights how generative AI can lead to severe, real-world harm.

In a recent case, a 60-year-old man, seeking to reduce his salt intake, turned to ChatGPT for dietary advice instead of consulting a medical professional. 

The AI chatbot suggested he replace sodium chloride (common table salt) with sodium bromide.

Sodium bromide, while having some historic medicinal uses, is now primarily an industrial chemical and is toxic in large doses. 

Believing the AI's confident assertion, the man sourced the compound online and began consuming it for three months. 

What followed was a terrifying descent into bromide toxicity, a condition now so rare that most modern clinicians have never encountered it.

After three months, the man was admitted to hospital with a cocktail of severe symptoms, including paranoia, auditory and visual hallucinations, extreme thirst and ataxia, a neurological condition affecting muscle coordination. 

The man, with no prior psychiatric history, became so suspicious that he believed his neighbour was poisoning him and even refused water from hospital staff. 

His condition only improved after being treated with fluids and electrolytes in the hospital's inpatient psychiatric unit, where he was finally diagnosed.

The doctors who treated the man noted that his symptoms, which also included acne and small, red growths on the skin, were classic signs of bromism. 

This case serves as a stark warning of a core problem with AI: an incorrect question can lead to a dangerously incorrect answer, which the AI presents with an air of absolute authority.

Widespread trust, widespread risk

Despite such alarming incidents, a 2025 survey revealed that a significant share of the public already trusts AI for health advice. 

A recent poll, which surveyed adults in the United States, found that while trust in AI for health-related information (29%) is significantly lower than for doctors (93%), it still outranks influencers (19%) and social media platforms (9%). 

A third of Americans reported using AI to manage aspects of their health.

This growing reliance is fuelled by the convenience and accessibility of chatbots, with many finding it easier to ask AI specific questions than to use a traditional search engine or book a doctor's appointment. 

The World Health Organization (WHO) has echoed these concerns, calling for caution in using AI tools for health purposes. 

OpenAI, the developer of ChatGPT, is explicit in its terms of use: the AI is "not intended for use in the diagnosis or treatment of any health condition." 

However, this disclaimer is easily overlooked by users seeking a shortcut to professional advice.
