Can AI Replace Your Doctor? Experts Say Not Yet
A new Israeli study shows that AI struggles with clinical judgment and decision-making
- Yitzchak Eitan
- Published 10 Iyar 5785

As artificial intelligence becomes more common in healthcare, a new Israeli study takes a closer look at what AI can and can’t do, especially when real-life decisions are on the line.
The study was conducted by researchers from Bar-Ilan University, the University of Haifa, and Sheba Medical Center. They tested two advanced AI systems, ChatGPT and Google Gemini, against experienced physical therapists and physiotherapy students.
The test included 20 multiple-choice questions in the area of vestibular rehabilitation, which focuses on balance issues. The questions were divided into three categories: theoretical knowledge, basic application, and clinical reasoning.
According to the study, published in the Physical Therapy & Rehabilitation Journal, both AI systems scored perfectly on the theoretical questions. But when it came to clinical reasoning, that is, interpreting case studies and making decisions, they didn’t do as well. ChatGPT answered 50% of those questions correctly, and Gemini only 25%. By comparison, the experienced physical therapists answered 76.5% correctly, and the students 40.5%.
The researchers also looked at how well the AI explained its answers. ChatGPT gave clear and accurate explanations about half the time on the theory and basic application questions. But for clinical reasoning, only a quarter of its explanations were accurate, and another quarter were completely wrong.
Yael Arbel, a physical therapist who worked on the study, told Maariv, “AI is great for making information easy to find and for summarizing guidelines, but it can’t replace the thinking process of real medical professionals.”
She added that it’s important to use the technology wisely, with careful oversight and a clear understanding of its limitations. By recognizing those limits, she said, we can use AI to help, not harm, patient care.