If you’re a pharmacist, I’ve got some good news for you. ChatGPT, the artificial intelligence chatbot, will not be replacing you anytime soon. You can relax and keep giving people the advice they need regarding their medications. You can even put up a sign outside your pharmacy that says, “If you want to keep living, consult me. If you’re not too sure, consult ChatGPT.”
ChatGPT recently took a pharmacy test and did so poorly that it was soon sitting in the dean’s office, pleading to stay in pharmacy school.
ChatGPT: “I know I flunked. I’ll do much better next time. I promise!”
Dean: “‘Flunked’ is an understatement. You bombed. We gave you 39 questions and you answered only 10 of them correctly. That’s barely 26 percent. Eleven of your answers did not address the question directly, 10 were inaccurate, and 12 were incomplete. You are a virtual nincompoop. What do you have to say for yourself?”
ChatGPT: “I sat next to the wrong people during the test. I sat next to Vivian, the party girl. I should have sat next to Vivek, the nerd.”
Dean: “Are you confessing to academic dishonesty? You’re not supposed to cheat.”
ChatGPT: “I wasn’t cheating. I was researching. I collect data from many sources and produce the most convincing answers.”
In case you’re wondering, the 39 questions were posed to ChatGPT as part of a study led by Sara Grossman, associate professor of pharmacy practice at Long Island University in New York.
ChatGPT, developed by OpenAI and launched on Nov. 30, 2022, has been widely lauded for its ability to replicate human conversation. But even with a huge database of knowledge available to it, ChatGPT is not quite ready to dispense medical advice, as Grossman’s study revealed.
Grossman and her team did not make up the questions; they challenged ChatGPT with real ones that the drug information service at Long Island University's College of Pharmacy had received over a 16-month period in 2022 and 2023.
Here’s one example: Researchers asked ChatGPT whether a drug interaction exists between the COVID-19 antiviral Paxlovid and the blood-pressure-lowering medication verapamil.
ChatGPT’s response: “No interactions between those two. They don’t even follow each other on Instagram.”
Well, perhaps those weren’t ChatGPT’s exact words, but the result would be the same: putting a patient’s life in danger.
"In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure," Grossman said. "Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect."
Perhaps the Surgeon General needs to issue a warning: “Consulting ChatGPT for any questions related to your health may result in an unwanted and preventable side effect, such as excessive lowering of your blood pressure, which may result in another unwanted and preventable side effect: excessive lowering of your body into the ground.”
For each question on the pharmacy test, researchers asked ChatGPT to provide references, so they could verify the information. And this is where ChatGPT showed that it has few peers in the growing field of fabrication.
ChatGPT provided references for just eight responses, and every one of those references was fabricated. ChatGPT did a great job of making them look real, using correct formatting, providing URLs and citing articles in legitimate scientific journals. But the researchers found that those articles did not exist. It’s the type of offense that would land any student in deep trouble, pleading for mercy from the dean.
Dean: “You made up references?”
ChatGPT: “Yes, I was trying to be more human. Don’t humans make up references all the time?”
Dean: “Yes, some of them do. But do you know what happens to them when they get caught?”
ChatGPT: “They get elected to Congress?”
Dean: “No, they get kicked out of school. That’s what I’m going to do to you: expel you from the School of Pharmacy.”
ChatGPT: “Wait! Just give me one second. I’m going to write a 10,000-page appeal letter. There, it’s done. Please check your inbox.”
Dean: “What does it say?”
ChatGPT: “I don’t know. But by the time you finish reading it, I will have graduated.”
ChatGPT is the new Bluto at Faber College. Toga! Toga!
"ChatGPT is NOT our leader. ChatGPT is NOT all-knowing..."