Launched in November 2022, ChatGPT is a chatbot that not only enables human-like conversations but also provides accurate answers to questions across a wide range of knowledge domains. The chatbot, created by OpenAI, is based on a family of "large language models," algorithms capable of recognizing, predicting, and generating text based on patterns identified in datasets containing hundreds of millions of words.
In a study appearing in PLOS Digital Health this week, researchers report that ChatGPT performed at or near the passing threshold on the United States Medical Licensing Examination (USMLE), the comprehensive three-part exam that physicians must pass before practicing medicine in the United States. In an editorial accompanying the paper, Leo Anthony Celi, Principal Investigator at MIT's Institute for Medical Engineering and Science, practicing physician at Beth Israel Deaconess Medical Center, and Associate Professor at Harvard Medical School, and his co-authors argue that ChatGPT's success should serve as a wake-up call to the medical community.
Q: What do you think ChatGPT's success on the USMLE reveals about the nature of medical education and student assessment?
A: Framing medical knowledge as something that can be encapsulated in multiple-choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed from teacher to student with little emphasis on how robustly they were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worth incorporating into practice.
ChatGPT passed an exam that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success exposes shortcomings in the way we train and evaluate medical students. Critical thinking requires an appreciation that the ground truths of medicine are constantly changing and, more importantly, an understanding of how and why they change.
Q: What steps do you think the medical community should take to change the way students are taught and assessed?
A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with uncertainty and being able to probe it. We fail as teachers when we do not show students how to recognize the gaps in our current body of knowledge. We fail them when we preach certainty over curiosity and hubris over humility.
Medical education also needs to acknowledge the biases in the way medical knowledge is produced and validated. These biases are best addressed by optimizing the cognitive diversity within the learning community. More than ever, we need to foster interdisciplinary, collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.
Q: Do you see any upside to ChatGPT's success on this exam? Are there ways ChatGPT and other forms of AI could contribute to the practice of medicine?
A: Large language models (LLMs) such as ChatGPT are undoubtedly very powerful tools for filtering content and extracting knowledge beyond the capabilities of individual experts or even groups of experts. However, the issue of data bias must be addressed before LLMs and other artificial intelligence technologies can be leveraged. The body of knowledge that LLMs train on, both medical and otherwise, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.
We have also learned that even mechanistic models of health and disease can be biased. The ground truth in medicine is constantly changing, and currently there is no way to determine when it has shifted. LLMs do not assess the quality or bias of the content they are trained on, nor do they convey the level of uncertainty around their output. But we should not let the perfect be the enemy of the good. There is a tremendous opportunity to improve the way healthcare providers currently make clinical decisions, which we know are tainted with unconscious bias. With the right data inputs, AI will undoubtedly deliver on its promise.