Comment on https://derma.jmir.org/2025/1/e60827
doi:10.2196/72540
Juels Parker commented on our study comparing the sufficiency of ChatGPT, Google Bard, and Bing artificial intelligence (AI) in generating patient-facing responses to questions about five dermatological diagnoses [1,2]. He highlights an important need to compare AI to existing patient education tools, such as handouts, peer-reviewed articles, and patient-centered websites.
We agree that AI is not a benign entity and that many resources besides AI exist for patients to learn about their conditions [3,4]. We also agree that AI cannot be deemed superior to existing materials without a comparative assessment. Yet inherent differences between AI and existing materials inhibit such a comparison in the context of our original study.
Our pilot study compared AI chatbot responses to potential patient questions, with the primary goal of comparing the utility of three chatbots by assessing their strengths and weaknesses. As Parker suggests, recommending the use of AI in place of existing patient education materials would require a larger, more robust investigation that compares AI to existing resources. In our study, however, AI plays an inherently different role than traditional patient resources, such as paper handouts, which precludes a comparative assessment. Generative AI offers users the flexibility to ask questions and receive direct answers, whereas traditional forms of patient education require patients to search for answers to their questions. By evaluating generative AI, our study simulates how patients might ask questions in the real world. As such, a comparison to existing patient resources was beyond the scope of our study and would not have answered our research question: to evaluate the utility of chatbots in generating patient-facing responses. Additionally, patient education materials vary between practices, hindering the ability to conduct a comparative analysis with applicability to real practice. While our conclusions suggest that AI may be used by patients to obtain information about their condition, we emphasize that this recommendation is limited in extent and that chatbots should not function as a first-line resource. Only approximately half of the responses in our study were considered sufficient for clinical practice, highlighting three domains in which chatbots require improvement: readability, removing inaccuracies, and improving specificity.
In conclusion, Parker highlights an important consideration regarding AI in dermatology: whether information gleaned from AI is superior to existing patient resources. However, in the context of our study, a comparative analysis between AI and existing resources would not have contributed to our goal of comparing chatbots. In the broader context of AI in dermatology, a study whose primary intention is to compare AI and existing materials for their clinical utility would provide novel insights into the future of AI in practice. Our pilot study does not provide sufficient evidence to confidently recommend patient use of AI. Rather, it serves as a basis for further examination of AI’s role in dermatology by illustrating the strengths and weaknesses of different chatbots. We appreciate Parker’s critical discussion of the implications of our work and the role of AI in dermatology.
Conflicts of Interest
None declared.
References
- Parker J. The importance of comparing new technologies (AI) to existing tools for patient education on common dermatologic conditions: a commentary. JMIR Dermatol. 2025;8:e71768. [CrossRef]
- Chau CA, Feng H, Cobos G, Park J. The comparative sufficiency of ChatGPT, Google Bard, and Bing AI in answering diagnosis, treatment, and prognosis questions about common dermatological diagnoses. JMIR Dermatol. Jan 7, 2025;8:e60827. [CrossRef] [Medline]
- Tan S, Xin X, Wu D. ChatGPT in medicine: prospects and challenges: a review article. Int J Surg. Jun 1, 2024;110(6):3701-3706. [CrossRef] [Medline]
- Prabhu AV, Gupta R, Kim C, et al. Patient education materials in dermatology: addressing the health literacy needs of patients. JAMA Dermatol. Aug 1, 2016;152(8):946-947. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
Edited by Surya Nedunchezhiyan. This is a non–peer-reviewed article. Submitted 11.02.25; accepted 14.02.25; published 01.04.25.
Copyright © Courtney Chau, Hao Feng, Gabriela Cobos, Joyce Park. Originally published in JMIR Dermatology (http://derma.jmir.org), 1.4.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Dermatology, is properly cited. The complete bibliographic information, a link to the original publication on http://derma.jmir.org, as well as this copyright and license information must be included.