Introduction: The purpose of this study was to evaluate three chatbots – OpenAI ChatGPT, Microsoft Bing Chat (currently Copilot), and Google Bard (currently Gemini) – on their responses to a defined set of audiological questions. Methods: Each chatbot was presented with the same 10 questions. The authors rated the responses on a Likert scale ranging from 1 to 5, and additional features, such as the number of inaccuracies or errors and whether references were provided, were also examined. Results: Most responses from all three chatbots were rated as satisfactory or better. However, every chatbot produced at least a few errors or inaccuracies. ChatGPT achieved the highest overall score, while Bard scored lowest. Bard was also the only chatbot that failed to answer one of the questions, and ChatGPT was the only chatbot that did not provide information about its sources. Conclusions: Chatbots are an intriguing tool for accessing basic information in a specialized area such as audiology. Nevertheless, caution is needed, as correct information is often mixed with errors that are difficult to detect unless the user is well versed in the field.
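For readers who want to reproduce this kind of comparison, the following is a minimal sketch, in Python, of how per-question Likert ratings from several raters could be aggregated into an overall score for each chatbot. The chatbot names are taken from the study, but the rating values and the number of raters and questions shown here are placeholders, not the study's data, and the mean-of-means aggregation is only one reasonable choice, not necessarily the authors' method.

# Hypothetical illustration: aggregating per-question Likert ratings (1-5)
# into an overall score per chatbot. All numbers below are placeholders.
from statistics import mean

# ratings[chatbot][question_index] = list of Likert scores, one per rater
ratings = {
    "ChatGPT": [[5, 4], [4, 4], [3, 5]],
    "Bing Chat": [[4, 4], [3, 4], [4, 3]],
    "Bard": [[3, 2], [4, 3], [2, 3]],
}

def overall_score(per_question):
    # Mean of the per-question mean ratings (one simple aggregation choice).
    return mean(mean(q) for q in per_question)

for bot, per_question in ratings.items():
    print(f"{bot}: overall score {overall_score(per_question):.2f}")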
