Comparing the Quality and Readability of Postoperative Instructions for Inferior Turbinate Reduction: Insights from ChatGPT and Google

Research output: Contribution to conference › Poster › peer-review

Abstract

Introduction: This study aims to compare the quality of postoperative instructions for inferior turbinate reduction surgery obtained from ChatGPT and Google, using the DISCERN tool and readability assessments. The primary objective is to evaluate the reliability and quality of information provided by these platforms, with secondary outcomes assessing readability metrics.

Methods: Postoperative instructions were retrieved from ChatGPT and Google using standardized prompts to simulate common patient queries. Google searches were conducted in a cleared browser environment, and the first 10 nonsponsored results were extracted. Instructions were anonymized, stripped of audiovisual elements, and standardized for analysis. Responses were scored independently by two reviewers using the DISCERN tool, with discrepancies resolved by a third reviewer. Readability was assessed using the Flesch-Kincaid Grade Level formula. Statistical analyses will involve ANOVA and Kruskal-Wallis tests to compare scores across sources, with significance set at P < 0.05.
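The Flesch-Kincaid Grade Level named above is a standard formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. As a minimal sketch of how such a score could be computed (not the study's actual pipeline; the regex tokenizer and vowel-group syllable counter are rough assumptions, and published tools use more careful syllabification):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels (approximation only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Short, common-word sentences (typical of well-written patient instructions) yield low grade levels; long sentences with polysyllabic medical terms push the score toward college reading level.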

Preliminary Results: Eleven responses were included in the preliminary analysis, with source distribution as follows: ChatGPT (n = 1), private clinical/hospital websites (n = 7), academic/institutional sources (n = 2), and an online blog/forum (n = 1). Audiovisual aids were present in 63.64% of responses, while advertisements and distractors appeared in 27.27% and 63.64% of responses, respectively.

Future Steps: Detailed statistical analyses will be conducted. Comparative subgroup analyses will evaluate differences between ChatGPT and Google responses in quality and readability metrics. Findings will provide insights into the potential for AI-generated content to enhance patient education.

Conclusions: Preliminary results suggest variability in the quality and readability of postoperative instructions across sources. Most materials were written at a high reading level, with private clinical/hospital websites being the most common source. This study highlights the need for more accessible and reliable patient education resources and explores the potential of AI-based solutions like ChatGPT to address these gaps.
Original language: American English
State: Published - 14 Feb 2025
Event: Oklahoma State University Center for Health Sciences Research Week 2025 - Oklahoma State University Center for Health Sciences, Tulsa, United States
Duration: 10 Feb 2025 – 14 Feb 2025
https://medicine.okstate.edu/research/research_days.html

Conference

Conference: Oklahoma State University Center for Health Sciences Research Week 2025
Country/Territory: United States
City: Tulsa
Period: 10/02/25 – 14/02/25

Keywords

  • readability
  • ChatGPT
  • postoperative
  • analysis
