Results might provide a convenient screening tool for people who may not suspect they are infected.
Asymptomatic people who are infected with COVID-19 exhibit, by definition, no discernible physical symptoms of the disease. They are thus less likely to seek out testing for the virus, and could unknowingly spread the infection to others.
But it seems those who are asymptomatic may not be entirely free of changes wrought by the virus. MIT researchers have now found that people who are asymptomatic may differ from healthy individuals in the way that they cough. These differences are not decipherable to the human ear. But it turns out that they can be picked up by artificial intelligence.
In a paper published recently in the IEEE Open Journal of Engineering in Medicine and Biology, the team reports on an AI model that distinguishes people with asymptomatic COVID-19 infections from healthy individuals through forced-cough recordings, which people voluntarily submitted through web browsers and devices such as cellphones and laptops.
The researchers trained the model on tens of thousands of samples of coughs, as well as spoken words. When they fed the model new cough recordings, it accurately identified 98.5 percent of coughs from people who were confirmed to have COVID-19, including 100 percent of coughs from asymptomatics — who reported they did not have symptoms but had tested positive for the virus.
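The 98.5 percent figure is a recall (sensitivity): the fraction of confirmed COVID-19 coughs that the model flagged. As a minimal sketch of that calculation, using hypothetical counts for illustration (the paper's exact confusion-matrix cells are not reproduced here):

```python
def recall(true_positives, false_negatives):
    """Fraction of actual positives the model caught."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts: 197 COVID coughs flagged, 3 missed.
print(round(recall(197, 3), 3))  # 0.985
```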
The team is working on incorporating the model into a user-friendly app, which, if FDA-approved and adopted on a large scale, could potentially be a free, convenient, noninvasive prescreening tool to identify people who are likely infected with COVID-19 but asymptomatic. A user could log in daily, cough into their phone, and instantly get an indication of whether they might be infected and should therefore confirm with a formal test.
“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” says co-author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory.
Subirana’s co-authors are Jordi Laguarta and Ferran Hueto, of MIT’s Auto-ID Laboratory.
Prior to the pandemic’s onset, research groups had already been training algorithms on cellphone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. In similar fashion, the MIT team was developing AI models to analyze forced-cough recordings for signs of Alzheimer’s, a disease associated not only with memory decline but also with neuromuscular degradation such as weakened vocal cords.
They first trained a general-purpose convolutional neural network, known as ResNet50, to discriminate sounds associated with different degrees of vocal cord strength. Studies have shown that the quality of the sound “mmmm” can be an indication of how weak or strong a person’s vocal cords are. Subirana trained the neural network on an audiobook dataset with more than 1,000 hours of speech to pick out the word “them” from similar words like “the” and “then.”
The team trained a second neural network to distinguish emotional states evident in speech, because Alzheimer’s patients — and people with neurological decline more generally — have been shown to display certain sentiments such as frustration, or having a flat affect, more frequently than they express happiness or calm. The researchers developed a sentiment speech classifier model by training it on a large dataset of actors intonating emotional states, such as neutral, calm, happy, and sad.
The researchers then trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.
Finally, the team combined all three models and overlaid an algorithm to detect muscular degradation. The algorithm does so by essentially simulating an audio mask, or layer of noise, and distinguishing strong coughs (those that can be heard over the noise) from weaker ones.
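The paper's masking code is not published; as a rough sketch of the underlying idea, assuming a simple RMS signal-to-noise criterion and synthetic signals standing in for real cough audio:

```python
import math
import random

def rms(signal):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def is_strong_cough(cough, noise_rms, snr_threshold_db=6.0):
    """Classify a cough as 'strong' if it clears the simulated
    noise mask by at least snr_threshold_db decibels."""
    snr_db = 20.0 * math.log10(rms(cough) / noise_rms)
    return snr_db >= snr_threshold_db

random.seed(0)
# Simulated noise mask: low-amplitude random samples at 16 kHz.
noise = [random.uniform(-0.05, 0.05) for _ in range(16000)]
noise_floor = rms(noise)

# A loud cough rises well above the mask; a weak one does not.
strong = [0.8 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
weak = [0.03 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]

print(is_strong_cough(strong, noise_floor))  # True
print(is_strong_cough(weak, noise_floor))    # False
```

The 6 dB threshold and sine-wave stand-ins are illustrative assumptions only; the actual model works on learned features, not a fixed SNR cutoff.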
With their new AI framework, the team fed in audio recordings, including of Alzheimer’s patients, and found it could identify the Alzheimer’s samples better than existing models. The results showed that, together, vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation were effective biomarkers for diagnosing the disease.
When the coronavirus pandemic began to unfold, Subirana wondered whether their AI framework for Alzheimer’s might also work for diagnosing COVID-19, as there was growing evidence that infected patients experienced some similar neurological symptoms such as temporary neuromuscular impairment.
“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for COVID.”
“A striking similarity”
In April, the team set out to collect as many recordings of coughs as they could, including those from COVID-19 patients. They established a website where people can record a series of coughs, through a cellphone or other web-enabled device. Participants also fill out a survey of symptoms they are experiencing, whether or not they have COVID-19, and whether they were diagnosed through an official test, by a doctor’s assessment of their symptoms, or if they self-diagnosed. They also can note their gender, geographical location, and native language.
To date, the researchers have collected more than 70,000 recordings, each containing several coughs, amounting to some 200,000 forced-cough audio samples, which Subirana says is “the largest research cough dataset that we know of.” Around 2,500 recordings were submitted by people who were confirmed to have COVID-19, including those who were asymptomatic.
The team used the 2,500 COVID-associated recordings, along with 2,500 more recordings that they randomly selected from the collection to balance the dataset. They used 4,000 of these samples to train the AI model. The remaining 1,000 recordings were then fed into the model to see if it could accurately discern coughs from COVID patients versus healthy individuals.
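The balancing-and-split step above can be sketched as follows, with hypothetical recording IDs standing in for the private dataset:

```python
import random

random.seed(42)

# Hypothetical labeled recordings: 2,500 COVID-positive plus
# 2,500 randomly drawn non-COVID recordings, as in the paper.
covid = [("covid_%04d" % i, 1) for i in range(2500)]
non_covid = [("other_%04d" % i, 0) for i in range(2500)]

# Balance the classes, shuffle, then hold out 1,000 samples for testing.
dataset = covid + non_covid
random.shuffle(dataset)
train, test = dataset[:4000], dataset[4000:]

print(len(train), len(test))  # 4000 1000
```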
Surprisingly, as the researchers write in their paper, their efforts have revealed “a striking similarity between Alzheimer’s and COVID discrimination.”
Without much tweaking within the AI framework originally meant for Alzheimer’s, they found it was able to pick up patterns in the four biomarkers — vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation — that are specific to COVID-19. The model identified 98.5 percent of coughs from people confirmed with COVID-19, and of those, it accurately detected all of the asymptomatic coughs.
“We think this shows that the way you produce sound changes when you have COVID, even if you’re asymptomatic,” Subirana says.
The AI model, Subirana stresses, is not meant to diagnose symptomatic people; it cannot determine whether their symptoms are due to COVID-19 or another condition like flu or asthma. The tool’s strength lies in its ability to discern asymptomatic coughs from healthy coughs.
The team is working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals around the world to collect a larger, more diverse set of cough recordings, which will help to train and strengthen the model’s accuracy.
As they propose in their paper, “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved.”
Ultimately, they envision that audio AI models like the one they’ve developed may be incorporated into smart speakers and other listening devices so that people can conveniently get an initial assessment of their disease risk, perhaps on a daily basis.
Reference: “COVID-19 Artificial Intelligence Diagnosis using only Cough Recordings” by Jordi Laguarta, Ferran Hueto and Brian Subirana, 30 September 2020, IEEE Open Journal of Engineering in Medicine and Biology.
This research was supported, in part, by Takeda Pharmaceutical Company Limited.