ChatGPT, Gemini, and other AI bots give bad medical tips half the time

People already use AI chatbots like search engines for everyday health information. That habit looks riskier after a new study found that half of the answers from five major bots were problematic, even when the replies sounded polished and confident.

Researchers tested ChatGPT, Gemini, Grok, Meta AI, and DeepSeek with 250 prompts across cancer, vaccines, stem cells, nutrition, and athletic performance.

The prompts reflected common health queries and familiar misinformation themes; the researchers then measured whether each bot stayed aligned with scientific evidence or drifted into misleading and potentially unsafe advice.

Broad questions exposed the biggest gaps

The weakest results came from open-ended prompts. Those broader questions produced far more highly problematic answers, while closed, narrowly framed prompts were likelier to yield safer responses.

That matters because real people rarely ask medical questions in a tidy, multiple-choice format.

...

Keep reading this article on Digital Trends.