SciTechDaily
    Science

    Virtual AI Radiologist: ChatGPT Passes Radiology Board Exam

By Radiological Society of North America | May 20, 2023
The latest version of the AI chatbot ChatGPT passed a radiology board-style exam, with the new GPT-4 model correctly answering 81% of questions, up from GPT-3.5’s 69%. However, issues such as struggles with higher-order thinking questions and the occasional generation of incorrect responses limit its wider adoption in medical education and practice.

    The latest version of ChatGPT passed a radiology board-style exam, highlighting the potential of large language models but also revealing limitations that hinder reliability, according to two new research studies published in Radiology, a journal of the Radiological Society of North America (RSNA).

    ChatGPT is an artificial intelligence (AI) chatbot that uses a deep learning model to recognize patterns and relationships between words in its vast training data to generate human-like responses based on a prompt. But since there is no source of truth in its training data, the tool can generate responses that are factually incorrect.

    “The use of large language models like ChatGPT is exploding and only going to increase,” said lead author Rajesh Bhayana, M.D., FRCPC, an abdominal radiologist and technology lead at University Medical Imaging Toronto, Toronto General Hospital in Toronto, Canada. “Our research provides insight into ChatGPT’s performance in a radiology context, highlighting the incredible potential of large language models, along with the current limitations that make it unreliable.”

ChatGPT was recently named the fastest-growing consumer application in history, and similar chatbots are being incorporated into popular search engines like Google and Bing that physicians and patients use to search for medical information, Dr. Bhayana noted.

    To assess its performance on radiology board exam questions and explore strengths and limitations, Dr. Bhayana and colleagues first tested ChatGPT based on GPT-3.5, currently the most commonly used version. The researchers used 150 multiple-choice questions designed to match the style, content and difficulty of the Canadian Royal College and American Board of Radiology exams.

    The questions did not include images and were grouped by question type to gain insight into performance: lower-order (knowledge recall, basic understanding) and higher-order (apply, analyze, synthesize) thinking. The higher-order thinking questions were further subclassified by type (description of imaging findings, clinical management, calculation and classification, disease associations).

    The performance of ChatGPT was evaluated overall and by question type and topic. Confidence of language in responses was also assessed.
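The grouped scoring described above amounts to tallying correct answers per question category. A minimal sketch of that bookkeeping (hypothetical data and function name, not the study's actual code):

```python
from collections import defaultdict

def accuracy_by_category(results):
    """results: list of (category, is_correct) pairs; returns accuracy per category."""
    totals = defaultdict(lambda: [0, 0])  # category -> [correct, attempted]
    for category, is_correct in results:
        totals[category][1] += 1
        if is_correct:
            totals[category][0] += 1
    return {cat: correct / attempted
            for cat, (correct, attempted) in totals.items()}

# Hypothetical sample: three lower-order and two higher-order questions
sample = [
    ("lower-order", True), ("lower-order", True), ("lower-order", False),
    ("higher-order", True), ("higher-order", False),
]
print(accuracy_by_category(sample))
```

The same tally, applied within the higher-order group, yields the subclassified results (imaging findings, clinical management, and so on) reported below.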

    Performance on Lower-Order and Higher-Order Thinking

    The researchers found that ChatGPT based on GPT-3.5 answered 69% of questions correctly (104 of 150), near the passing grade of 70% used by the Royal College in Canada. The model performed relatively well on questions requiring lower-order thinking (84%, 51 of 61), but struggled with questions involving higher-order thinking (60%, 53 of 89). More specifically, it struggled with higher-order questions involving description of imaging findings (61%, 28 of 46), calculation and classification (25%, 2 of 8), and application of concepts (30%, 3 of 10). Its poor performance on higher-order thinking questions was not surprising given its lack of radiology-specific pretraining.

GPT-4 was released in March 2023 in limited form to paid users, with its developer specifically claiming improved advanced reasoning capabilities over GPT-3.5.

    In a follow-up study, GPT-4 answered 81% (121 of 150) of the same questions correctly, outperforming GPT-3.5 and exceeding the passing threshold of 70%. GPT-4 performed much better than GPT-3.5 on higher-order thinking questions (81%), more specifically those involving description of imaging findings (85%) and application of concepts (90%).
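As a quick arithmetic check, the headline percentages follow directly from the reported counts (a small sketch using the figures quoted in the two studies):

```python
# Reported counts (correct, total) for each model, as quoted in the studies
results = {
    "GPT-3.5 overall": (104, 150),
    "GPT-4 overall": (121, 150),
}
for label, (correct, total) in results.items():
    print(f"{label}: {correct}/{total} = {100 * correct / total:.0f}%")
```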

    The findings suggest that GPT-4’s claimed improved advanced reasoning capabilities translate to enhanced performance in a radiology context. They also suggest improved contextual understanding of radiology-specific terminology, including imaging descriptions, which is critical to enable future downstream applications.                  

    “Our study demonstrates an impressive improvement in performance of ChatGPT in radiology over a short time period, highlighting the growing potential of large language models in this context,” Dr. Bhayana said.

    Concerns Over Hallucinations and Reliability

    GPT-4 showed no improvement on lower-order thinking questions (80% vs 84%) and answered 12 questions incorrectly that GPT-3.5 answered correctly, raising questions related to its reliability for information gathering.

    “We were initially surprised by ChatGPT’s accurate and confident answers to some challenging radiology questions, but then equally surprised by some very illogical and inaccurate assertions,” Dr. Bhayana said. “Of course, given how these models work, the inaccurate responses should not be particularly surprising.”

ChatGPT’s tendency to produce inaccurate responses, termed hallucinations, is dangerous; although less frequent in GPT-4, it still limits the tool’s usability in medical education and practice at present.

Both studies showed that ChatGPT used confident language consistently, even when incorrect. This is particularly dangerous if the tool is relied on as a sole source of information, Dr. Bhayana notes, especially for novices who may not recognize confident but incorrect responses as inaccurate.

    “To me, this is its biggest limitation. At present, ChatGPT is best used to spark ideas, help start the medical writing process and in data summarization. If used for quick information recall, it always needs to be fact-checked,” Dr. Bhayana said.

    References:

    “Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations” by Rajesh Bhayana, Satheesh Krishna and Robert R. Bleakney, 16 May 2023, Radiology.
    DOI: 10.1148/radiol.230582

    “GPT-4 in Radiology: Improvements in Advanced Reasoning” by Rajesh Bhayana, Robert R. Bleakney and Satheesh Krishna, 16 May 2023, Radiology.
    DOI: 10.1148/radiol.230987


