
Not Yet – ChatGPT Fails to Diagnose Children’s Medical Cases, with an 83% Error Rate

Researchers say the chatbot is poor at recognizing relationships between conditions and needs selective training.

Introduction

ChatGPT is still no Dr. House.

While the much-hyped chatbot has previously failed to diagnose difficult medical cases, scoring an accuracy rate of 39 percent in an analysis last year, a new study published this week in JAMA Pediatrics finds that the fourth version of the large language model is especially bad with cases involving children: its accuracy in diagnosing pediatric conditions was just 17 percent.

Missed Relationships

To test ChatGPT, the researchers pasted the relevant text of each medical case into the prompt, and two qualified physician-researchers then scored the AI-generated answers as correct, incorrect, or “did not fully capture the diagnosis.” In that last category, ChatGPT offered a clinically related or overly broad condition that was not specific enough to count as a correct diagnosis. For example, it diagnosed one child’s case as a branchial cleft cyst, a lump in the neck or below the collarbone, when the correct diagnosis was branchio-oto-renal syndrome, a genetic condition that causes abnormal tissue development in the neck as well as malformations of the ears and kidneys. One sign of the condition is the formation of branchial cleft cysts.
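For readers curious what this kind of testing looks like in practice, the sketch below shows how a case vignette might be submitted to the model programmatically. It is only an illustration under stated assumptions: the study pasted case text into ChatGPT directly, and the model name, prompt wording, and use of the OpenAI Python client here are not drawn from the paper.

```python
# Hypothetical sketch only: the study's authors pasted case text into ChatGPT by
# hand, so this programmatic version (model name, prompt wording, client usage)
# is an illustration, not the paper's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_for_diagnosis(case_text: str) -> str:
    """Send a pediatric case vignette to the model and return its suggested diagnosis."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed; the study evaluated "the fourth version" of the model
        messages=[
            {"role": "system", "content": "You are assisting with a diagnostic exercise."},
            {"role": "user", "content": f"Case: {case_text}\n\nWhat is the most likely diagnosis?"},
        ],
    )
    return response.choices[0].message.content


# Placeholder vignette; in the study, two physician-researchers then scored each
# answer as correct, incorrect, or not fully capturing the diagnosis.
print(ask_for_diagnosis("A child presents with a lateral neck mass, hearing loss, ..."))
```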

Proposed Improvements

Although the chatbot struggled in this test, the researchers suggest it could improve with selective training on accurate and trustworthy medical literature rather than the inaccurate and misleading material found online. They also propose giving chatbots real-time access to medical data, allowing the models to improve their accuracy and refine themselves over time.

“This provides an opportunity for researchers to investigate whether training on specific medical data and refining it can improve the diagnostic accuracy of large language model-based chatbots,” the researchers conclude.

Source: https://arstechnica.com/science/2024/01/dont-use-chatgpt-to-diagnose-your-kids-illness-study-finds-83-error-rate/

