Abstract
PURPOSE: To assess the accuracy, completeness, and readability of patient educational material produced by a machine learning model and to compare the output with that provided by a societal website.
MATERIALS AND METHODS: Content from the Society of Interventional Radiology Patient Center website was retrieved, categorized, and organized into discrete questions. These questions were entered into the ChatGPT platform, and the output was analyzed for word and sentence counts; readability, using multiple validated scales; factual correctness; and suitability for patient education, using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P).
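The abstract does not state which software was used to score the passages; as a minimal illustrative sketch, assuming the open-source textstat package (not named in the study), the word counts and readability scores described above could be gathered per passage as follows. The helper name readability_profile is hypothetical.

```python
# Illustrative sketch only: the study does not specify its scoring software;
# the open-source "textstat" package and this helper function are assumptions.
import textstat


def readability_profile(text: str) -> dict:
    """Word/sentence counts plus several validated readability scores for one passage."""
    return {
        "words": textstat.lexicon_count(text),
        "sentences": textstat.sentence_count(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "gunning_fog": textstat.gunning_fog(text),
        "smog_index": textstat.smog_index(text),
        "coleman_liau_index": textstat.coleman_liau_index(text),
        "automated_readability_index": textstat.automated_readability_index(text),
    }


if __name__ == "__main__":
    sample = (
        "Interventional radiology uses image guidance to perform "
        "minimally invasive procedures through small incisions."
    )
    print(readability_profile(sample))
```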
RESULTS: A total of 21,154 words were analyzed, including 7,917 words from the website and 13,377 words representing the total output of the ChatGPT platform across 22 text passages. Compared with the societal website, output from the ChatGPT platform was longer and more difficult to read on 4 of 5 readability scales. The ChatGPT output was incorrect for 12 (11.5%) of 104 questions. When reviewed using the PEMAT-P tool, the ChatGPT content scored lower than the website material. Content from both the website and ChatGPT was significantly above the recommended fifth or sixth grade reading level for patient education, with a mean Flesch-Kincaid grade level of 11.1 (±1.3) for the website and 11.9 (±1.6) for the ChatGPT content.
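For reference, the Flesch-Kincaid grade level cited above is conventionally computed from word, sentence, and syllable counts as

\[
\text{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59,
\]

so mean scores of roughly 11 to 12 correspond to text written at an 11th- to 12th-grade reading level, well above the fifth- to sixth-grade target for patient education materials.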
CONCLUSIONS: The ChatGPT platform may produce incomplete or inaccurate patient educational content, and providers should be familiar with the limitations of the system in its current form. Opportunities may exist to fine-tune existing large language models to optimize them for the delivery of patient educational content.