Technology plays a central role in our lives; without technological advances, everyday tasks would be slower and harder. Today, researchers are using artificial intelligence to build devices that can estimate whether a person is infected with coronavirus simply by analyzing the sound of their cough, their speech, or even their breathing.
Coughing and sneezing were common symptoms of the bubonic plague pandemic that devastated Rome in the late sixth century. The origin of the phrase "God bless you," said after a person coughs or sneezes, is often attributed to Pope Gregory I, who believed the prayer would offer protection from death. The flu-like symptoms associated with the plague recur in the current Covid-19 pandemic, to the point where "normal" coughs trigger immediate alarm and concern. In today's technologically advanced times, however, we need not rely on prayers alone. We can now develop AI models that learn multiple acoustic features to distinguish cough sounds of Covid-19-positive patients from those of otherwise healthy people.
An illustration of the sound signals taken from the dataset, recorded on a mobile phone. The top row shows the time-domain waveforms, and the bottom row shows the corresponding time-frequency representations. These representations, called spectrograms, indicate regions of high (red) and low (blue) energy in the sound signal at each point in the time-frequency plane.
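A spectrogram like the one in the figure can be computed by slicing the waveform into short overlapping windows and taking the Fourier transform of each slice. The sketch below, using only NumPy, is a minimal illustration; the frame length, hop size, and the 440 Hz test tone are assumptions chosen for the example, not parameters from the research described in this post.

```python
import numpy as np

def spectrogram(signal, frame_len=1024, hop=256):
    """Magnitude spectrogram: rows are frequency bins, columns are
    time frames, values are energy in decibels."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT of each windowed frame -> time-frequency energy grid
    energy = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return 10 * np.log10(energy.T + 1e-12)  # dB scale; epsilon avoids log(0)

# Example: one second of a 440 Hz tone at an 8 kHz sampling rate.
# Its energy should concentrate in the frequency bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
peak_bin = spec.mean(axis=1).argmax()
print(peak_bin * sr / 1024)  # peak frequency in Hz, close to 440
```

In a real cough-analysis pipeline the red/blue images in the figure are exactly this matrix rendered as a heat map, with time on the horizontal axis and frequency on the vertical axis.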
How does a person’s voice help in detecting Covid-19?
According to a European Research Council (ERC) article, since Covid-19 is a respiratory disease, sounds such as heartbeats, sighs, breathing, and sneezing are indicators of the infection and, as such, a powerful source of medical information. AI researchers have collected cough recordings from the general public via mobile apps and websites and developed AI solutions for cough-based prescreening tools. The dataset includes samples from healthy and asymptomatic individuals as well as Covid-19 patients. It contains two types of speech recordings: a complete sentence, and a set of vowel sounds sustained for a few seconds, such as aaa or eee, which capture the finer details of the human voice box. Although the human ear cannot pick up these differences, AI models can be trained to discriminate between a cough from a Covid-19-positive patient and one from a Covid-19-negative patient.
Using signal processing techniques such as filtering and voice activity detection, the recorded speech signal, which is stored in digital form, is first preprocessed to remove unwanted components and background noise. Feature extraction algorithms are then applied to the preprocessed signal to extract traits that characterize it. These features serve as input to the AI algorithm, which learns to identify patterns or intrinsic parameters associated with the disease.
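The two-stage pipeline above can be sketched in a few lines of NumPy. This is a deliberately simplified illustration, not the method used by the researchers: the voice activity detector here is a crude energy threshold, and the features (log energy and zero-crossing rate) are stand-ins for the richer acoustic features real systems extract; the frame sizes and the 30% energy quantile are assumed values for the example.

```python
import numpy as np

def preprocess_and_extract(signal, sample_rate, frame_ms=25, hop_ms=10,
                           energy_quantile=0.3):
    """Illustrative pipeline: energy-based voice activity detection,
    then simple per-frame features for a downstream classifier."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Crude VAD: drop low-energy frames, which are likely silence or noise
    energy = (frames ** 2).mean(axis=1)
    active = frames[energy > np.quantile(energy, energy_quantile)]
    # Two toy features per retained frame: log energy and zero-crossing rate
    log_energy = np.log((active ** 2).mean(axis=1) + 1e-12)
    zcr = (np.diff(np.sign(active), axis=1) != 0).mean(axis=1)
    # Each row is one frame's feature vector: this matrix is what an
    # AI model would consume as input
    return np.column_stack([log_energy, zcr])

# Example: half a second of a noisy 200 Hz "voiced" tone at 16 kHz
rng = np.random.default_rng(0)
sr = 16000
t = np.arange(sr // 2) / sr
signal = np.sin(2 * np.pi * 200 * t) + 0.05 * rng.standard_normal(sr // 2)
features = preprocess_and_extract(signal, sr)
print(features.shape)  # (number of retained frames, 2)
```

Production systems replace these toy features with descriptors such as mel-spectrogram or cepstral coefficients, but the flow (preprocess, detect voiced regions, extract per-frame features, feed to a model) is the same.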
“Since the voice production mechanism is so complicated and dependent on cognitive abilities, any factor that affects your body or your mind will reflect in your voice. The changes can be in fractions of seconds -- what we call “micro” signatures, that are not audible to the untrained listener,” said Rita Singh, a research professor of computer science at Carnegie Mellon University, whose team created the COVID Voice Detector.
Would you trust the results of a Covid-19 test based on the sounds of your voice or cough? Why or why not? Join the conversation below.