
AI created to translate babies’ cries

Artificial intelligence could distinguish between normal cry signals and abnormal ones, such as those resulting from an underlying illness.

Babies may cry because they are ill or in pain, but they will also often let out a whimper if they are hungry or sleepy. This makes it incredibly difficult for parents, especially first-time parents, to know exactly why their little ones are snivelling. Now, a group of researchers at Northern Illinois University in the US has used artificial intelligence to create a method of distinguishing between normal cry signals and abnormal ones, such as those resulting from an underlying illness.


The method could be useful both for parents at home and for doctors who need to discern the cries of sick children, the researchers say.


While each baby’s cry is unique, cries do share some common features. The team developed an algorithm, based on an existing automatic speech recognition system, to detect and recognise the features of infant cries, combined with a technique called compressed sensing – a process that can reconstruct a signal from very sparse data, even in environments with high levels of background noise.

The algorithm analyses the waveforms of infants’ cries, looking for features in their loudness, pitch and timbre that match a database of recorded baby cries previously identified by experienced neonatal nurses and caregivers. For example, the “neh” sound is generally related to hunger: when a baby has the sucking reflex and pushes their tongue to the roof of the mouth, a “neh” sound is created. Similarly, the “eh” sound means that a baby needs to burp, and it generally happens after feeding.
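Those three features each have a standard, simple acoustic estimate: loudness as root-mean-square amplitude, pitch as the strongest autocorrelation peak, and timbre (roughly, brightness) as the spectral centroid. The sketch below is a hypothetical illustration on a synthetic tone, not the researchers' code, and the 250–700 Hz pitch search range is an assumption:

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumption for this toy example)

def cry_features(wave, sr=SR):
    """Return simple loudness, pitch and timbre estimates for a waveform."""
    # Loudness: root-mean-square amplitude
    loudness = float(np.sqrt(np.mean(wave ** 2)))

    # Pitch: strongest autocorrelation peak within an assumed
    # fundamental-frequency range of roughly 250-700 Hz
    ac = np.correlate(wave, wave, mode="full")[len(wave) - 1:]
    lo, hi = int(sr / 700), int(sr / 250)
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch_hz = sr / lag

    # Timbre proxy: spectral centroid (magnitude-weighted mean frequency)
    spectrum = np.abs(np.fft.rfft(wave))
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / sr)
    centroid_hz = float(np.sum(freqs * spectrum) / np.sum(spectrum))

    return {"loudness": loudness, "pitch_hz": pitch_hz,
            "centroid_hz": centroid_hz}

# Synthetic stand-in for a cry: a 400 Hz tone with a weaker 800 Hz harmonic
t = np.arange(SR // 4) / SR  # 0.25 s of audio
wave = 0.6 * np.sin(2 * np.pi * 400 * t) + 0.2 * np.sin(2 * np.pi * 800 * t)
feats = cry_features(wave)
print(feats)
```

A real classifier would compute features like these frame by frame over short windows and compare the resulting feature trajectories against the labelled cry database.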

“Like a special language, there is lots of health-related information in various cry sounds. The differences between sound signals actually carry the information. These differences are represented by different features of the cry signals. To recognise and leverage the information, we have to extract the features and then obtain the information in them,” said Prof Lichuan Liu.

The researchers hope that the method could be widened out to assist with other areas of medicine in which decision making relies heavily on experience.

“The ultimate goals are healthier babies and less pressure on parents and caregivers,” said Liu. “We are looking into collaborations with hospitals and medical research centres, to obtain more data and requirement scenario input, and hopefully we could have some products for clinical practice.”


