New Delhi, October 11 (IANS). A study released on Friday carries a stark warning: patients should not rely on artificial intelligence (AI) chatbots for medication information, because AI-powered search engines and chatbots may not always provide accurate and safe answers about medicines.
Researchers in Belgium and Germany conducted the study after finding that many of the chatbots' answers were wrong or potentially harmful.
In a research paper published in the journal BMJ Quality and Safety, the researchers said the answers given by AI chatbots are often so complex that a degree-level education may be needed to understand them.
Search engines changed significantly in 2023 with the introduction of AI-powered chatbots. The new versions promised better search results, detailed answers, and a new kind of interactive experience.
The team from Germany’s Friedrich-Alexander-University Erlangen-Nuremberg said chatbots have access to vast datasets drawn from the Internet. Trained on these, they can answer any health-related question, but their information can be highly inaccurate and even harmful.
In this cross-sectional study, the researchers examined whether search engines with AI-powered chatbots are able to provide complete and accurate answers to patients’ questions.
The researchers asked a chatbot (Bing Copilot) ten patient questions about each of the 50 most commonly prescribed drugs in the US, then assessed how readable, complete and accurate its responses were.
Only half of the ten questions were answered with the highest completeness. In addition, 26 percent of chatbot responses did not match the reference data, and in more than 3 percent of cases the responses were completely inconsistent with it.
About 42 percent of these chatbot responses were judged likely to cause moderate or mild harm, and 22 percent were likely to cause serious harm. A major drawback the team identified is that the chatbot is unable to understand the intention behind a patient’s question.
“Despite their potential, it is still important that patients consult their health professionals. Chatbots will not always provide error-free information,” the researchers said.
–IANS
FZ/AS