Leo Anthony Celi on ChatGPT and medicine
The successful performance of ChatGPT on the U.S. Medical Licensing Exam demonstrates shortcomings in how medical students are trained and evaluated, says Leo Anthony Celi, a principal research scientist at MIT's Institute for Medical Engineering and Science and a practicing physician.
by Anne Trafton for MIT News
Boston MA (SPX) Feb 10, 2023

Launched in November 2022, ChatGPT is a chatbot that can not only engage in human-like conversation, but also provide accurate answers to questions in a wide range of knowledge domains. The chatbot, created by the firm OpenAI, is based on a family of "large language models" - algorithms that can recognize, predict, and generate text based on patterns they identify in datasets containing hundreds of millions of words.

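For readers unfamiliar with how such models produce text, the short Python sketch below illustrates the core mechanism of next-token prediction. It is purely illustrative: it uses the small, openly available GPT-2 model through the Hugging Face transformers library as a stand-in for the far larger proprietary models behind ChatGPT, and the prompt is hypothetical.

# Illustrative sketch only: GPT-2, loaded via the Hugging Face "transformers"
# library, stands in for the much larger proprietary models behind ChatGPT.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prompt; the model extends it with whatever tokens its training
# data makes statistically likely - there is no check for factual accuracy.
prompt = "The three parts of the U.S. Medical Licensing Exam are"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=25, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Greedy decoding (do_sample=False) keeps the example deterministic; chatbots such as ChatGPT instead sample from the predicted distribution and layer instruction-following training on top of the basic next-token predictor.
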
In a study appearing in PLOS Digital Health this week, researchers report that ChatGPT performed at or near the passing threshold of the U.S. Medical Licensing Exam (USMLE) - a comprehensive, three-part exam that doctors must pass before practicing medicine in the United States.

In an editorial accompanying the paper, Leo Anthony Celi - a principal research scientist at MIT's Institute for Medical Engineering and Science, a practicing physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School - and his co-authors argue that ChatGPT's success on this exam should be a wake-up call for the medical community.

Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of medical education and the evaluation of students?

A: Encapsulating medical knowledge into multiple-choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice.

ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires an appreciation that ground truths in medicine continually shift and, more importantly, an understanding of how and why they shift.

Q: What steps do you think the medical community should take to modify how students are taught and evaluated?

A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with and being able to probe the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.

Medical education also requires being aware of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to inspire cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

Q: Do you see any upside to ChatGPT's success in this exam? Are there beneficial ways that ChatGPT and other forms of AI can contribute to the practice of medicine?

A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to these biases. Ground truths in medicine are continuously shifting, and currently, there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are being trained on.

Neither do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver its promise once we have optimized the data input.

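To make the point about uncertainty concrete, the closest thing a plain language model offers is the probability it assigns to each candidate next token, as in the illustrative sketch below (again assuming GPT-2 as a stand-in, with a hypothetical prompt). Those probabilities reflect how plausible a continuation looks given the training text, not a calibrated estimate of whether the statement is clinically true.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prompt for illustration only.
prompt = "Aspirin is contraindicated in patients with"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# The top candidate continuations and their probabilities reflect word-pattern
# frequency in the training data, not a calibrated estimate of medical truth.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
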
Research Report: "ChatGPT passing USMLE shines a spotlight on the flaws of medical education."

Related Links
Laboratory for Computational Physiology