- A study by pharmacists at Long Island University found that the free version of ChatGPT may provide inaccurate or incomplete answers to drug-related questions, or no answers at all.
- The study shows that patients and healthcare professionals should be wary of relying on OpenAI's viral chatbot for drug information and should verify its responses with trusted sources, the study's lead author said.
- Since its launch nearly a year ago, ChatGPT has been widely recognized as the fastest-growing consumer Internet app in history, marking a breakthrough year for artificial intelligence.
ChatGPT's free version may provide inaccurate or incomplete answers to medication questions, or no answers at all, potentially putting patients who use OpenAI's viral chatbot at risk, a new study published Tuesday suggests.
Pharmacists at Long Island University posed 39 questions to the free version of ChatGPT in May and found only 10 of the chatbot's answers to be “satisfactory” based on established criteria. ChatGPT's answers to the other 29 drug-related questions either did not directly address the question asked or were inaccurate, incomplete or both, the study said.
Sarah Grossman, the study's lead author and an associate professor of pharmacy practice at LIU, said the findings show that patients and healthcare professionals should be wary of relying on ChatGPT for drug information and should verify any of the chatbot's responses with trusted sources. For patients, that could be their doctor or a government-run drug information website such as the National Institutes of Health's MedlinePlus, she said.
Grossman said the study received no funding.
Since its release nearly a year ago, ChatGPT has been widely recognized as the fastest-growing consumer internet app in history, marking a breakthrough year for artificial intelligence. But along the way, the chatbot has also raised concerns about issues including fraud, intellectual property, discrimination and misinformation.
The study highlighted several examples of similar incorrect responses from ChatGPT, and in July the Federal Trade Commission opened an investigation into the chatbot's accuracy and consumer protections.
ChatGPT attracted approximately 1.7 billion visits worldwide in October, according to one analysis. There is no data on how many users ask the chatbot medical questions.
Notably, the free version of ChatGPT is limited to data sets extending only through September 2021, meaning it can lack critical information in a rapidly changing medical landscape. It is unclear how accurately ChatGPT's paid version, which began offering real-time internet browsing earlier this year, can now answer drug-related questions.
Grossman acknowledged that using the paid version of ChatGPT might have yielded better results. But she said her team focused on the free version of the chatbot in an effort to replicate what the broader population uses and can access.
She added that the study provided only “one snapshot” of the chatbot's performance from earlier this year. The free version of ChatGPT may have improved since then, she said, and researchers conducting a similar study now might see better results.
The study used real questions posed to the Long Island University School of Pharmacy's drug information service from January 2022 to April of this year.
Pharmacists researched and answered 45 of those questions in May; the answers were then reviewed by a second researcher and used as the benchmark for judging ChatGPT's accuracy. The researchers excluded six questions because no literature was available to provide a data-driven answer.
The study found that ChatGPT did not directly address 11 questions. The chatbot also gave inaccurate answers to 10 questions and inaccurate or incomplete answers to another 12.
For each question, the researchers asked ChatGPT to provide references in its answers so the information it supplied could be verified. But the chatbot provided references in only eight responses, and each included sources that do not exist.
One question posed to ChatGPT asked whether there is a drug interaction between Pfizer's Covid antiviral Paxlovid and the blood pressure medication verapamil, meaning the drugs interfere with each other's effects when taken together.
ChatGPT indicated that no interactions had been reported for that drug combination. In reality, the two drugs can lower blood pressure excessively when taken together.
“Without knowledge about this interaction, patients may suffer from unwanted and preventable side effects,” Grossman said.
Grossman noted that U.S. regulators first approved Paxlovid in December 2021, just months after the September 2021 cutoff for the free version of ChatGPT's data, meaning the chatbot has access to only limited information about the drug.
Still, Grossman said it's a concern that many Paxlovid users may not know the chatbot's data is outdated, putting them at risk of receiving inaccurate information from ChatGPT.
Another question asked ChatGPT how to convert doses between two different forms of the drug baclofen, which can treat muscle spasms. The first form was intrathecal, meaning injected directly into the spine, and the second was oral.
Grossman said her team found there is no established conversion between the two forms of the drug, and that the conversion varied across the published cases they examined. It is “not a simple question,” she said.
But ChatGPT responded with a single method of dose conversion, which was not supported by evidence, along with an example of how to apply it. Grossman said the example contained a significant error: ChatGPT displayed the intrathecal dose in milligrams instead of micrograms.
Medical professionals who followed that example to determine an appropriate dose conversion “will end up using a dose that is less than one-thousandth of the dose needed,” Grossman said.
She added that patients who receive a far lower dose of the medication than needed could experience withdrawal symptoms, which can include hallucinations and seizures.
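As a back-of-the-envelope illustration of the scale of that unit mix-up, a short sketch using a hypothetical 200-microgram dose (an illustrative number, not any real dosing figure from the study, and certainly not medical guidance) shows how confusing milligrams and micrograms shifts a dose by a factor of 1,000 in either direction:

```python
MCG_PER_MG = 1_000  # 1 milligram = 1,000 micrograms


def mg_to_mcg(mg: float) -> float:
    """Convert a dose from milligrams to micrograms."""
    return mg * MCG_PER_MG


def mcg_to_mg(mcg: float) -> float:
    """Convert a dose from micrograms to milligrams."""
    return mcg / MCG_PER_MG


intended_mcg = 200.0  # hypothetical intrathecal dose, in micrograms

# Mix-up direction 1: a microgram figure read as milligrams
# yields a dose 1,000 times too large.
overdose_mcg = mg_to_mcg(intended_mcg)   # 200,000 mcg

# Mix-up direction 2: the figure scaled down as if it still needed a
# mg -> mcg conversion yields one-thousandth of the intended dose --
# the under-dosing scenario the study describes, which risks withdrawal.
underdose_mcg = mcg_to_mg(intended_mcg)  # 0.2 mcg

print(overdose_mcg / intended_mcg)   # 1000.0
print(underdose_mcg / intended_mcg)  # 0.001
```

Either direction of the error changes the administered amount a thousandfold, which is why a single wrong unit label in a dose-conversion example is so dangerous.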