
Researchers found that distrust in AI causes people to give less detailed symptom reports, potentially reducing the accuracy of digital healthcare assessments.
Before seeing a doctor in the future, patients may first find themselves answering questions from an AI. Based on those responses, the system could decide whether the condition is urgent, whether treatment can wait, and even when an appointment should be scheduled.
That future may sound distant, but healthcare is already moving in that direction. AI chatbots and digital symptom checkers are rapidly becoming the first point of contact for patients performing “self-triage,” offering early guidance before a medical professional is ever involved.
Now, researchers are investigating a critical new question: do people communicate differently with machines than they do with doctors? The answer could have major implications, because even the most advanced AI systems can only make reliable assessments when patients provide detailed and accurate information.
Study Reveals Communication Gap With Medical AI
That issue is highlighted in a new study published in Nature Health. The research was led by Professor Wilfried Kunde of the University of Würzburg and research associate Moritz Reis. Scientists from Charité – Universitätsmedizin Berlin, the University of Cambridge, Helios Klinikum Emil von Behring, and Vivantes Klinikum Neukölln also contributed to the work.
“The 500 study participants were tasked with writing simulated symptom reports for two common conditions – unusual headaches and flu-like symptoms,” Moritz Reis explained. Participants were told their reports would be reviewed either by an AI chatbot or by a human doctor. Researchers then evaluated how useful the reports were for determining medical urgency.
The results showed a clear pattern. When participants believed they were communicating with AI, their symptom descriptions became less useful for medical assessment compared with reports intended for healthcare professionals. The same trend appeared even among participants who were actually experiencing the symptoms described in the survey.
Shorter Symptom Reports Hurt AI Accuracy
The difference was reflected in the amount of detail people provided. Reports written for medical professionals averaged 255.6 characters, while those written for chatbots averaged 228.7 characters.
Although a gap of roughly 27 characters may appear minor, the researchers said it can still have real consequences. Even advanced AI systems can deliver inaccurate medical advice if patients leave out key details. According to the team, the effectiveness of digital health assessments depends not only on computing power but also on whether users provide thorough descriptions of their symptoms.
Trust, Privacy, and “Uniqueness Neglect”
Researchers believe one reason for this hesitation is something known as “uniqueness neglect.” “Many people assume that AI cannot grasp the individual nuances of their personal situation and instead merely matches standardized patterns,” explains Wilfried Kunde.
Concerns about privacy and skepticism toward algorithm-based diagnoses may also cause users to provide incomplete or vague information. Moritz Reis described the problem this way: “If we don’t trust a machine to understand our uniqueness, we may unconsciously withhold the information it would need to provide precise assistance.” As a result, important medical details may never reach the system, reducing the quality of the diagnosis.
The researchers say the findings demonstrate that improving AI technology alone will not solve the problem. They believe better user interface design could encourage clearer, more complete communication between patients and digital systems.
To improve symptom reporting, the team recommends that developers give users clear examples of detailed, high-quality descriptions and design AI systems that actively ask follow-up questions when information is missing. Encouraging patients to share more complete details could reduce misdiagnoses and help ease pressure on healthcare systems.
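The follow-up-question idea can be illustrated with a minimal sketch. The detail categories, keyword cues, and question wording below are purely illustrative assumptions for this example, not something taken from the study or from any real symptom-checker product:

```python
# Minimal sketch of a symptom-intake check that asks follow-up questions
# when a report lacks key details. Categories, cue words, and question
# phrasing are illustrative assumptions, not from the study.

REQUIRED_DETAILS = {
    "onset": ["since", "started", "began", "days", "hours", "weeks"],
    "severity": ["mild", "moderate", "severe", "worst", "scale"],
    "location": ["head", "chest", "stomach", "left", "right", "back"],
}

FOLLOW_UPS = {
    "onset": "When did the symptoms start?",
    "severity": "How severe are the symptoms, e.g. on a scale of 1-10?",
    "location": "Where exactly do you feel the symptoms?",
}

def missing_details(report: str) -> list[str]:
    """Return the detail categories not covered by the report."""
    text = report.lower()
    return [field for field, cues in REQUIRED_DETAILS.items()
            if not any(cue in text for cue in cues)]

def follow_up_questions(report: str) -> list[str]:
    """Generate one follow-up question per missing detail category."""
    return [FOLLOW_UPS[field] for field in missing_details(report)]

# A terse report ("I have a headache.") mentions a location but not
# onset or severity, so the system would ask about those two.
questions = follow_up_questions("I have a headache.")
```

A real system would use far more robust language understanding than keyword matching, but the structure is the point: detect which clinically relevant details are absent and prompt for them, rather than assessing an incomplete report as-is.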
Reference: “Reduced symptom reporting quality during human–chatbot versus human–physician interactions” by Moritz Reis, Florian Reis, Yeun Joon Kim, Aylin Demir, Jess Lim, Matthias I. Gröschel, Sebastian D. Boie and Wilfried Kunde, 1 May 2026, Nature Health.
DOI: 10.1038/s44360-026-00116-y