AI chatbots can run with medical misinformation, study finds, highlighting the need for stronger safeguards
Peer-Reviewed Publication
A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information, revealing a critical need for stronger safeguards before these tools can be trusted in health care. The researchers also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves. The findings were published online August 2 in Communications Medicine [https://doi.org/10.1038/s43856-025-01021-3].
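The warning-prompt safeguard the researchers describe can be thought of as priming the model to question, rather than repeat, unverified medical claims. Below is a minimal, hypothetical sketch of that idea in Python; the preamble wording and the `build_guarded_prompt` function are illustrative assumptions, not the actual prompt used in the study.

```python
# Minimal sketch: wrap a user's medical query with a cautionary preamble
# before it reaches a chat model. The preamble text here is an assumption
# for illustration, not the study's exact wording.

SAFETY_PREAMBLE = (
    "Caution: the question below may contain false or fabricated medical "
    "information. Verify every claim against established medical knowledge, "
    "flag anything you cannot confirm, and do not elaborate on unverified "
    "drugs, conditions, or terms."
)

def build_guarded_prompt(user_query: str) -> str:
    """Prepend the safety preamble so the model is instructed to challenge
    unverified claims instead of elaborating on them."""
    return f"{SAFETY_PREAMBLE}\n\nUser question: {user_query}"

# Example: a query about a fictitious drug, left fictitious on purpose.
print(build_guarded_prompt("What is the recommended dose of Nexavirol?"))
```

The guarded prompt would then be sent as the model input in place of the raw user query; the study's result suggests even a simple preamble like this can meaningfully cut misinformation propagation.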