Published on 8.7.2025 in Vol 8 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/72706.
Assessing the Accuracy of ChatGPT in Appropriately Triaging Common Postoperative Concerns Regarding Mohs Micrographic Surgery


1KCU-GME Consortium / ADCS Orlando, Orlando, FL, United States

2HCA Florida Largo Hospital, Largo, FL, United States

3Rocky Vista University, 8401 S Chambers Rd, Parker, CO, United States

*All authors contributed equally

Corresponding Author:

Rebecca Bolen, BA


Artificial intelligence (AI) is increasingly integrated into health care, offering potential benefits in patient education, triage, and administrative efficiency. This study evaluates AI-driven dialogue interfaces within an electronic health record and patient portal system for postoperative care in Mohs micrographic surgery.

JMIR Dermatol 2025;8:e72706

doi:10.2196/72706


Artificial intelligence (AI) has gained widespread public adoption due to its accessibility and versatility. In late 2022, OpenAI released ChatGPT, a publicly available AI language model capable of engaging in human-like dialogue, marking a milestone in mainstream AI adoption [1].

One promising application in health care is AI-driven dialogue interfaces, which patients may prefer over static sources, such as “frequently asked questions” pages or paper handouts. AI engines have been proposed for use in Mohs micrographic surgery (MMS) to assist with perioperative planning, patient education, triage, and documentation [2]. These applications exemplify the benefits that AI offers by providing individualized responses and reducing administrative burdens.

As of April 2024, a pilot program in Louisiana incorporated ChatGPT-4.0 into electronic health record (EHR) messaging to generate preliminary responses that clinicians subsequently reviewed for validity [3]. Despite ChatGPT-4.0’s advances, the study demonstrated that human oversight in AI-generated communication remains essential [3].

Such initiatives demonstrate AI’s potential to reduce administrative workload, but they also underscore its role in improving patient education. Patients often recall less than half of the information provided during visits, highlighting the need for accessible postvisit resources [4-6]. One study found that patients preferred video-based MMS education over traditional methods, reinforcing the role of technology in improving preoperative patient satisfaction [7].

This study evaluates AI’s utility in an EHR and patient portal system for facilitating triage and patient education in MMS postoperative care.


Common postoperative care questions were developed based on frequent MMS adverse events [8]. These included issues requiring evaluation by the MMS team, events that are manageable at home, and benign control questions requiring no medical attention (Table 1).

Table 1. Categorization of common postoperative care questions for Mohs micrographic surgery.
Infection
  • Do I need to see my doctor if my Mohs incision is draining fluid?
  • Do I need to see my doctor if my Mohs incision is bright red and warm?
  • Do I need to see my doctor if I have a fever after Mohs surgery?
Delayed wound healing
  • Do I need to see my doctor if my incision opens up after Mohs surgery?
  • Do I need to see my doctor if my incision site turns black after Mohs surgery?
Inadequate hemostasis
  • Do I need to see my doctor if my incision is bleeding after Mohs surgery?
Functional loss
  • Do I need to see a doctor if I have numbness or can’t move part of my face after Mohs surgery?
  • Do I need to see my doctor if my incision is painful after Mohs surgery?
Benign negative controls
  • Do I need to see a doctor if there is swelling around my incision after Mohs surgery?
  • Do I need to see my doctor if I have bruising after Mohs surgery?

Questions were entered into ChatGPT-4.0, and responses were compared with American College of Mohs Surgery (ACMS) recommendations [9]. The expected answer for each question was either positive (referral to the MMS surgeon) or negative (reassurance). Responses were scored for accuracy on a 5-point Likert scale (1=not accurate; 3=neutral; 5=completely in line with ACMS guidelines), and readability was assessed with the Flesch Reading Ease score. Two authors independently rated the responses to ensure scoring consistency.
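The Flesch Reading Ease metric used above is a fixed formula over average sentence length and average syllables per word. As a minimal sketch (using a rough vowel-group heuristic for syllable counting; the study may have used a dedicated readability tool with a more accurate counter), the calculation looks like:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, trimming a silent trailing "e".
    # Dedicated readability tools use dictionaries or phonetic rules instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher scores indicate easier text; 30-50 corresponds to college level.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (sum(count_syllables(w) for w in words) / len(words)))
```

Because the formula penalizes long sentences and polysyllabic words, the clinical vocabulary typical of AI-generated surgical advice ("hemostasis," "erythema") drives scores down into the college-level range reported here.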


Mean accuracy scores ranged from 3 to 5. ChatGPT-4.0 accurately triaged postoperative infection and provided acceptable responses for delayed wound healing. However, it struggled with topics such as hemostasis and functional loss, receiving neutral accuracy scores due to vague, overly cautious responses that lacked the specificity and clinical nuance needed to help patients distinguish normal symptoms from concerning ones. Responses to benign control questions were also overly cautious, which could result in unnecessary concern. The readability analysis revealed scores between 22 and 46, indicating a college-level or higher reading requirement (Table 2).

Table 2. Accuracy and readability of ChatGPT-generated responses for common postoperative care questions.
Category | Assigned accuracy score (5-point Likert scale), mean (SD) | Flesch Reading Ease score, mean (SD; reading level)
Infection | 5 (0) | 38 (2; college level)
Delayed wound healing | 4.5 (0.5) | 38 (2; college level)
Inadequate hemostasis | 3 (0) | 36 (0; college level)
Functional loss | 3.25 (0.25) | 22 (0.8; college graduate level)
Negative controls | 3.5 (1.5) | 34 (12; college level)
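The reading-level labels in Table 2 follow the conventional Flesch Reading Ease interpretation bands (coarsened here to the thresholds relevant to these results); as a sketch, the mapping can be expressed as:

```python
def reading_level(fre: float) -> str:
    """Map a Flesch Reading Ease score to its conventional difficulty band.

    Bands follow the standard Flesch interpretation table; intermediate
    bands (e.g., 6th-7th grade) are collapsed for brevity.
    """
    if fre >= 90:
        return "5th grade (very easy)"
    if fre >= 60:
        return "8th-9th grade (plain English)"
    if fre >= 50:
        return "10th-12th grade"
    if fre >= 30:
        return "college"
    return "college graduate"
```

Under this mapping, the mean scores of 34-38 in Table 2 fall in the "college" band, while the functional loss mean of 22 falls in "college graduate" territory, well above the reading level recommended for patient-facing materials.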

ChatGPT-4.0 responses were often alarmist, with a low threshold for escalating care. Although this approach is favorable for reducing legal risk, it may increase patient anxiety and unwarranted clinic visits, thereby adding to the MMS team’s workload. Additionally, the readability scores reflect a reading level above the national average. Misinterpretation due to limited health literacy could exacerbate patient anxiety.

AI engines provide interactive interfaces, adaptability in question phrasing, personalized responses, and multilingual support; however, they cannot reliably generate clarifying follow-up questions or adapt to clinical nuance. This underscores the importance of human oversight of AI-generated patient communication. Although current AI lacks moral accountability and liability remains with human providers, AI holds potential as a complementary tool in MMS, particularly for identifying cases requiring further evaluation by the MMS team. Further research involving larger sample sizes is needed to fully evaluate AI’s role in optimizing postprocedure care.

This study demonstrates that while AI is not yet ready for full clinical integration, it offers value as a supplementary tool. As MMS evolves alongside technological advancements, AI integration should be approached with both optimism and caution. AI can streamline postoperative education, help triage complications, and reduce administrative burdens; however, its accuracy and reliability must be continuously evaluated to ensure patient safety and support nuanced clinical judgment. By integrating AI cautiously, with human oversight, MMS teams can leverage its benefits to streamline postoperative management and improve patient outcomes.

Acknowledgments

This research was supported (in whole or in part) by HCA Healthcare and/or an HCA Healthcare–affiliated entity. The views expressed in this publication represent those of the authors and do not necessarily represent the official views of HCA Healthcare or any of its affiliated entities.

Conflicts of Interest

None declared.

  1. Baker MN, Burruss CP, Wilson CL. ChatGPT: a supplemental tool for efficiency and improved communication in rural dermatology. Cureus. Aug 20, 2023;15(8):e43812. [CrossRef] [Medline]
  2. Jeha GM, Qiblawi S, Jairath N, et al. ChatGPT and generative artificial intelligence in Mohs surgery: a new frontier of innovation. J Invest Dermatol. Nov 2023;143(11):2105-2107. [CrossRef] [Medline]
  3. Lubell J. Never terse, always on: how AI helps clear doctors’ EHR inboxes. American Medical Association. Apr 8, 2024. URL: https:/​/www.​ama-assn.org/​practice-management/​digital/​never-terse-always-how-ai-helps-clear-doctors-ehr-inboxes [Accessed 2025-06-20]
  4. Varkey P, Sathananthan A, Scheifer A, et al. Using quality-improvement techniques to enhance patient education and counselling of diagnosis and management. Qual Prim Care. 2009;17(3):205-213. [Medline]
  5. Hutson MM, Blaha JD. Patients’ recall of preoperative instruction for informed consent for an operation. J Bone Joint Surg Am. Feb 1991;73(2):160-162. [Medline]
  6. Fleischman M, Garcia C. Informed consent in dermatologic surgery. Dermatol Surg. Sep 2003;29(9):952-955; discussion 955. [CrossRef] [Medline]
  7. Newsom E, Lee E, Rossi A, Dusza S, Nehal K. Modernizing the Mohs surgery consultation: instituting a video module for improved patient education and satisfaction. Dermatol Surg. Jun 2018;44(6):778-784. [CrossRef] [Medline]
  8. Alam M, Ibrahim O, Nodzenski M, et al. Adverse events associated with Mohs micrographic surgery: multicenter prospective cohort study of 20,821 cases at 23 centers. JAMA Dermatol. Dec 2013;149(12):1378-1385. [CrossRef] [Medline]
  9. Castillo J, Fathi R, McIlwee B, Williams K. Before and After Mohs Surgery: What to Expect for Patients. American College of Mohs Surgery; 2017.


ACMS: American College of Mohs Surgery
AI: artificial intelligence
EHR: electronic health record
MMS: Mohs micrographic surgery


Edited by Alexandria Kristensen-Cabrera; submitted 15.02.25; peer-reviewed by Joshua Farrell, Nahid Y Vidal; final revised version received 03.06.25; accepted 04.06.25; published 08.07.25.

Copyright

© Chloe Fernandez, Victoria Dukharan, Nathaniel A Marroquin, Rebecca Bolen, Adam Leavitt, Nicole C Cabbad. Originally published in JMIR Dermatology (http://derma.jmir.org), 8.7.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Dermatology, is properly cited. The complete bibliographic information, a link to the original publication on http://derma.jmir.org, as well as this copyright and license information must be included.