Scientists develop an artificial intelligence program that detects cognitive impairment accurately and efficiently from voice recordings.
A lot of time—and money—is required to diagnose Alzheimer’s disease. After running lengthy in-person neuropsychological exams, clinicians have to transcribe, review, and analyze every response in detail. However, researchers at Boston University (BU) have developed a new tool that could automate the process and eventually allow it to move online. Their machine learning-powered computational model can detect cognitive impairment from audio recordings of neuropsychological tests, all with no in-person appointment needed. Their findings were published recently in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.
“This approach brings us one step closer to early intervention,” says Ioannis Paschalidis, a coauthor on the paper and a BU College of Engineering Distinguished Professor of Engineering. According to Paschalidis, faster and earlier detection of Alzheimer’s could drive larger clinical trials that focus on individuals in the early stages of the disease and potentially enable clinical interventions that slow cognitive decline: “It can form the basis of an online tool that could reach everyone and could increase the number of people who get screened early.”
The scientists trained their AI model using audio recordings of neuropsychological interviews from over 1,000 individuals in the Framingham Heart Study, a long-running BU-led project investigating cardiovascular disease and other physiological conditions. Their program transcribed the interviews using automated online speech recognition tools—think, “Hey, Google!”—and then encoded the transcripts into numbers with natural language processing, a machine learning technique that helps computers understand text. A final model was trained to evaluate the likelihood and severity of an individual’s cognitive impairment using a combination of demographic data, the text encodings, and real diagnoses from neurologists and neuropsychologists.
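The pipeline described above can be pictured in miniature. The sketch below is purely illustrative and is not the authors' code: the vocabulary, function names, and weights are invented, the bag-of-words encoding stands in for the paper's NLP text encodings, and the toy logistic scorer stands in for the trained model whose weights would be learned from neurologist-labeled diagnoses.

```python
# Illustrative sketch of the article's pipeline stages (all names hypothetical):
# transcribe -> encode text as numbers -> combine with demographics -> score.
from collections import Counter
import math

# Toy fixed vocabulary; a real system would learn a much richer text encoding.
VOCAB = ["cat", "dog", "house", "tree", "apple"]

def encode_transcript(text: str) -> list:
    """Bag-of-words counts over a fixed vocabulary (stand-in for the
    NLP text encodings described in the article)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def impairment_score(text_features, age, education_years, weights, bias):
    """Toy logistic model combining text encodings with demographic data;
    in the study, weights would be fit to real clinical diagnoses."""
    x = text_features + [age / 100.0, education_years / 20.0]
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

# Example run with made-up transcript text and arbitrary weights.
features = encode_transcript("the cat sat near the house and the tree")
score = impairment_score(features, age=72, education_years=12,
                         weights=[0.1] * 7, bias=-0.5)
print(round(score, 3))
```

The key design point the article highlights is that the informative signal lives in the text features (what was said), not in acoustic qualities of the recording, which is why a text-only encoding like this can serve as the model input.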
Not only was the model able to accurately distinguish between healthy individuals and those with dementia, but Paschalidis says it could also detect differences between those with mild cognitive impairment and those with dementia. Surprisingly, the quality of the recordings and how people spoke—whether their speech flowed smoothly or consistently faltered—turned out to be less important than the content of what they were saying.
“It surprised us that speech flow or other audio features are not that critical; you can automatically transcribe interviews reasonably well, and rely on text analysis through AI to assess cognitive impairment,” says Paschalidis, who’s also the new director of BU’s Rafik B. Hariri Institute for Computing and Computational Science & Engineering. Though the research team still needs to validate its findings against other sources of data, the results suggest their tool could support clinicians in diagnosing cognitive impairment using audio recordings, including those from virtual or telehealth appointments.
Screening before Symptom Onset
The model also provides insight into what parts of the neuropsychological exam might be more important than others in determining whether an individual has impaired cognition. The researchers’ model splits the exam transcripts into different sections based on the clinical tests performed. For example, they discovered that the Boston Naming Test—during which clinicians ask individuals to label a picture using one word—is most informative for an accurate dementia diagnosis. “This might enable clinicians to allocate resources in a way that allows them to do more screening, even before symptom onset,” says Paschalidis.
Early diagnosis of dementia is not only important for patients and their caregivers to be able to create an effective plan for treatment and support, but it’s also critical for scientists working on therapies to slow and prevent Alzheimer’s disease progression. “Our models can help clinicians assess patients in terms of their chances of cognitive decline,” says Paschalidis, “and then best tailor resources to them by doing further testing on those that have a higher likelihood of dementia.”
Want to Join the Research Effort?
The research team is looking for volunteers to take an online survey and submit an anonymous cognitive test—results will be used to provide personalized cognitive assessments and will also help the team refine their AI model.
Reference: “Automated detection of mild cognitive impairment and dementia from voice recordings: A natural language processing approach” by Samad Amini, Boran Hao, Lifu Zhang, Mengting Song, Aman Gupta, Cody Karjadi, Vijaya B. Kolachalama, Rhoda Au and Ioannis Ch. Paschalidis, 7 July 2022, Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.
Also contributing to this research were Samad Amini (ENG’24), Boran Hao (ENG’19,’24), and Lifu Zhang (CAS’22, ENG’22); Mengting Song, an ENG researcher; Aman Gupta (ENG’21), a BU Center for Information & Systems Engineering research assistant; Cody Karjadi (CAS’17, MET’20) of the Framingham Heart Study; Vijaya B. Kolachalama, a BU School of Medicine assistant professor; and Rhoda Au, a MED professor of anatomy and neurobiology. The work was supported by the National Science Foundation, Department of Energy, Office of Naval Research, National Institutes of Health, the Framingham Heart Study’s National Heart, Lung, and Blood Institute contract, National Institute on Aging, Alzheimer’s Association, Pfizer, Karen Toffler Charitable Trust, American Heart Association, and Boston University.