Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.
Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.
“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.
Isn’t it too easy for current chatbots/LLMs to lie about everything?
Train one on garbage, or in the wrong way, and it will agree with anything you want it to.
I asked DeepSeek what to visit nearby and to give me some URLs, and it hallucinated both the URLs and the places. Guess it wasn’t trained on anything about my local area.