Hearing aids with artificial intelligence

By Therabeep team |

Hearing loss can be a frustrating and isolating condition. However, thanks to advances in technology, there are now more options than ever for managing hearing loss and improving hearing ability. One of the most promising developments in hearing health care is the emergence of hearing aids with artificial intelligence (AI).

AI is a powerful and promising technology that has already revolutionized many aspects of our daily lives. From smart assistants to self-driving cars, AI offers a wide range of possibilities for improving efficiency, safety, and accessibility across different areas. This is achieved by automatically processing information such as images, text, or sound, and generating responses as simple as yes-or-no labels or as complex as speech.

Hearing aids are yet another example of devices benefiting from intelligent signal processing. AI hearing aids are designed to provide a personalized and seamless hearing experience, using algorithms to analyze and adapt to the user's individual hearing needs and preferences.

However, both hearing aids and, especially, artificial intelligence are surrounded by confusion and misunderstanding: How does artificial intelligence work? How can artificial intelligence improve my hearing? What advantages do these hearing aids really have? In this blog post, we will dive deep into the world of AI-powered hearing aids and explore the key features and benefits of these cutting-edge devices.

What is artificial intelligence?

Simply put, AI is the ability of machines to perform tasks that would normally require human intelligence, such as recognizing patterns, learning, and problem-solving. This is typically implemented as algorithms, often neural networks, that generate predictions based on the user's input. These algorithms are trained on a large collection of data and use the information in that dataset to generate predictions without being explicitly programmed to do so.

Years ago, machine learning algorithms were almost exclusively deployed in cloud environments. However, as processors have become faster and researchers have developed leaner algorithms, intelligent processing is increasingly performed on the device that acquires the data. For hearing aids, this means that the intelligent enhancement is computed on the device itself, usually on the behind-the-ear piece, without requiring the hearing aids to be connected to a smartphone or to the cloud.

A good example of AI that analyzes sound is speech recognition. Speech recognition systems are part of smart assistants such as Alexa and Siri, and use AI to transform acoustic data into a digital representation of the user's intent. Specifically, this involves a two-step process: the system first converts speech into text, and then matches the text against a database of predefined intents.

To perform speech recognition, the system needs to be trained on a large dataset of spoken language. This training dataset usually consists of audio clips matched to their respective transcriptions and a label representing the user's intent. Training teaches the system to recognize patterns in the audio signal and to understand the structure and rules of the language. Once trained, the system can interpret spoken language in real time.
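To make the second step of the pipeline concrete, here is a minimal, hypothetical sketch of matching a transcript against predefined intents. The intent names and keyword sets are invented for illustration; real assistants use far more sophisticated language models.

```python
# Toy sketch of intent matching: the transcript (already converted from
# speech to text) is compared against keyword sets for each intent.
# All intent names and keywords below are made up for illustration.

INTENTS = {
    "set_timer": {"set", "timer", "minutes"},
    "play_music": {"play", "music", "song"},
    "weather": {"weather", "forecast", "rain"},
}

def match_intent(transcript: str) -> str:
    """Pick the intent whose keyword set overlaps the transcript the most."""
    words = set(transcript.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(match_intent("please play some music"))  # play_music
```

A production system would replace the keyword overlap with a trained classifier, but the input/output contract (text in, intent label out) is the same.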

How does artificial intelligence work in hearing aids?

Traditionally, hearing aids were simple devices that just amplified incoming audio. As the computing power of small processors stopped being a barrier, these devices began incorporating classical sound-processing algorithms such as dynamic range compression and Wiener-filter-based noise reduction. More recently, with the advent of modern AI, they have started incorporating intelligent signal-processing capabilities. Just as speech recognition algorithms translate sound into a label, similar sound-analysis algorithms are now used to identify which sounds the user would like to hear better.
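To illustrate one of the classical algorithms mentioned above, here is a heavily simplified sketch of dynamic range compression: samples above a loudness threshold are scaled down so loud sounds don't overwhelm the listener. The threshold and ratio values are arbitrary; real hearing aids apply this per frequency band with smooth attack and release times.

```python
import numpy as np

def compress(samples: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Very simplified dynamic range compression: any amplitude above the
    threshold has its excess divided by the compression ratio."""
    out = samples.astype(float).copy()
    loud = np.abs(out) > threshold
    excess = np.abs(out[loud]) - threshold
    out[loud] = np.sign(out[loud]) * (threshold + excess / ratio)
    return out

signal = np.array([0.1, 0.4, 0.9, -1.0])
print(compress(signal))  # quiet samples untouched, loud peaks reduced
```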

One of the many ways artificial intelligence is used in hearing aids is acoustic environment identification. The hearing aids monitor surrounding sounds in real time and identify different environments and situations. When a specific situation is identified, the hearing aids update their settings to match the environment. For instance, if the device detects that someone is speaking, it will try to suppress background noise; when it identifies that you are listening to music, it will try to provide a more balanced sound experience. In practice, this smart customization translates into activating a specific equalization preset, tuning the feedback control, enabling noise reduction filters, or adjusting the gain of each directional microphone.

Even though hearing aids have started incorporating machine learning, the embedded algorithms are still far from state-of-the-art systems such as DALL-E or ChatGPT. The on-board machine learning algorithms are usually decision-based systems such as decision trees, or relatively shallow neural networks. Moreover, these algorithms are fed handcrafted features, such as the average sound intensity across specific frequency ranges. This is a consequence of the limited computing power of the hearing aid's processor, and it constrains both the analysis performance and the customization capabilities of the device.
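The handcrafted features mentioned above can be sketched as follows: the average spectral energy in a few frequency bands, computed from a short audio frame. The band edges here are arbitrary choices for illustration, not those of any particular device.

```python
import numpy as np

def band_energies(frame: np.ndarray, sample_rate: int,
                  bands=((0, 500), (500, 2000), (2000, 8000))) -> list:
    """Average spectral energy in a few frequency ranges -- the kind of
    handcrafted feature a small on-device classifier might consume."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].mean()) for lo, hi in bands]

# One frame of a 1 kHz tone sampled at 16 kHz: most of the energy
# lands in the 500-2000 Hz band (index 1).
sr = 16000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 1000 * t)
features = band_energies(frame, sr)
print(features.index(max(features)))  # 1
```

A shallow classifier fed three numbers like these is vastly cheaper to run than a network that ingests raw audio, which is why this design suits a hearing aid's tiny processor.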

The limited computing capabilities of hearing aids have not stopped manufacturers from experimenting with more powerful algorithms. As many Bluetooth-enabled hearing aids are connected to a smartphone, manufacturers have tried to offload the heavy computation to the smartphone's processor. This enables both the use of more powerful algorithms, such as deep and convolutional neural networks, and the use of the smartphone's microphones as a secondary audio input.

One of the many improvements enabled by this approach is direct spectrogram analysis. A spectrogram is a graphical representation of the frequency spectrum of a signal over time: the horizontal axis represents time, the vertical axis represents frequency, and the color intensity represents sound intensity. This "image" can be fed to powerful algorithms such as convolutional networks to perform accurate acoustic environment classification.

This increased computational power also opens the door to even more complex algorithms, such as intelligent denoising, where the AI algorithm generates a new spectrogram in which the background noise has been greatly reduced. The new spectrogram can then be turned back into sound and fed to the hearing aid's speaker.

So, why aren't all hearing aids using this smart denoising? Using AI to generate enhanced sound introduces a significant delay, ranging from a few tens of milliseconds to a couple hundred milliseconds. The user hears this gap as an echo effect, which makes speech comprehension difficult. Because of this, smart sound generation is only applied when the user suffers from a greater degree of hearing loss: since these patients have a hard time hearing the original signal, they mostly hear the post-processed sound and the delay is not a problem.
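As a rough sketch of how the spectrogram "image" is built, the following computes a magnitude spectrogram with a short-time Fourier transform. The frame and hop sizes are arbitrary values chosen for illustration.

```python
import numpy as np

def spectrogram(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Minimal short-time Fourier transform magnitude spectrogram:
    rows are frequency bins, columns are time frames."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

sr = 16000
t = np.arange(sr) / sr                 # one second of audio
audio = np.sin(2 * np.pi * 440 * t)    # a 440 Hz tone
spec = spectrogram(audio)
print(spec.shape)  # (129, 124): 129 frequency bins x 124 time frames
```

A convolutional network then treats `spec` exactly like a grayscale image; for denoising, a network would output a cleaned-up array of the same shape, which an inverse STFT turns back into audio.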

Looking towards the future, speech separation is a promising technology with the potential to revolutionize hearing aid technology. Earlier in this article we explained that hearing aids incorporate noise reduction algorithms like the Wiener filter. This noise reduction usually aims to eliminate noise that is long-lasting, predictable, or repetitive, such as the sound of an engine, an air conditioner, or a washing machine. Speech separation, however, aims to completely isolate speech from any other sound. It can remove a dog barking, background music, or the clatter of dishes in a dining room, producing clear speech that users can understand even without relying on amplification. Despite being a promising technology, speech separation can involve an even heavier computational load than intelligent denoising; because of this, it is hard to implement even when relying on the computational capabilities of smartphones.
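One common family of speech separation methods works by estimating a time-frequency mask over the spectrogram: keep the bins where speech dominates, suppress the rest. The toy sketch below "cheats" by building an ideal mask from known clean sources; a real separator would predict the mask from the mixture alone with a neural network, which is where the heavy computation lies.

```python
import numpy as np

def ideal_binary_mask(speech_spec: np.ndarray, noise_spec: np.ndarray) -> np.ndarray:
    """1 where speech is louder than noise in a time-frequency bin, 0 elsewhere."""
    return (np.abs(speech_spec) > np.abs(noise_spec)).astype(float)

# Tiny made-up magnitude spectrograms (rows: frequency, columns: time).
speech = np.array([[5.0, 0.1], [4.0, 0.2]])
noise = np.array([[1.0, 3.0], [0.5, 6.0]])
mixture = speech + noise

mask = ideal_binary_mask(speech, noise)
separated = mask * mixture  # speech-dominated bins kept, noise-dominated bins zeroed
print(separated)
```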

Besides sound processing, some devices include additional sensors aimed at health and activity monitoring. Since hearing aids tend to be used by older people, some devices include fall detection algorithms that send alerts to friends or family members. Additionally, these devices may include health-tracking sensors that monitor heart rate or count steps.

Which hearing aids have artificial intelligence?

Hearing aid manufacturers are increasingly including smart processing capabilities in their devices. Once simple amplifiers, hearing aids are rapidly adopting consumer technology such as Bluetooth connectivity and smart audio processing. Some of the features discussed in this article are only in a test phase, and others are yet to be implemented; however, a new generation of AI-powered hearing aids is emerging as more and more devices include smart capabilities. Let's look at some of the best examples.

Oticon More

One of the main features of the Oticon More is its use of neural networks to identify real-life sound scenes. The neural network in the Oticon More was trained on 12 million sound scenes, with the goal of identifying the acoustic environment. With this information, the hearing aids can provide balanced sound fine-tuned to each situation.

Traditionally, hearing aids performed simple rule-based sound scene identification, which limited the number of scenes that could be reliably identified. The neural networks used by the Oticon More, however, can identify many more situations with a high level of reliability. Another strong point of this device is that the intelligent processing is performed on the device itself, providing enhanced sound with minimal lag.

According to Oticon's research, the Oticon More delivers 30% more sound to the brain and increases speech understanding by 15% compared to some of their previous models. User reviews of the Oticon More have also been positive, showing initial support for the intelligent processing.

Starkey Evolv AI

Smart environment classifiers can recognize hundreds of listening situations and adjust the hearing aid settings accordingly. However, there will always be challenging situations that require extra effort to achieve clear-sounding audio. To this end, Starkey has developed Edge Mode, an extra layer of smart processing that takes a snapshot of the acoustic environment and finds the best settings for it.

Contrary to automatic environment classification, which can run several times per second, Edge Mode is triggered manually by tapping the device. This way, when the user feels that the hearing aids are not working optimally, Edge Mode can find better settings for that specific environment.

Tests performed on users with severe-to-profound hearing loss show that Edge Mode delivers almost a 40% improvement in speech understanding over automatic environment classification alone.

ReSound ONE

Many hearing aids feature one or more microphones placed behind the ear. This causes a problem: the hearing aids won't pick up the acoustic cues created by sounds resonating over the outer ear. To alleviate this, the ReSound ONE places a microphone and receiver inside the ear canal, which previously had not been possible due to the feedback generated by placing a speaker and a microphone close together.

Additionally, the ReSound ONE features two standard directional microphones and uses artificial intelligence to adjust the gain of each one. This is achieved through automatic environment classification, which identifies specific sound environments and gradually tunes the gain of the different microphones.

Widex Moment

While many hearing aid manufacturers have opted to create a predefined set of optimized settings for each acoustic environment, Widex has gone a different way. The drawback of many automatic environment detection systems is that users may not like the settings the hearing aids select. Because of this, Widex has created the SoundSense Learn technology.

In practice, the SoundSense Learn system prompts the user to select between two different sound profiles. Once a profile is selected, the hearing aids collect an acoustic fingerprint and upload both the fingerprint and the selected sound profile to the cloud. With this information, SoundSense Learn can match acoustic environments to the preferences of hundreds of users in its database. This way, sound profiles are generated based on actual user preferences rather than on guesswork from the hearing aids' engineers.

If you are interested in exploring the potential benefits of AI-powered hearing aids, it is recommended to consult an audiologist for guidance and recommendations. They can help you choose which hearing aids best suit your needs, and determine whether AI technology would be beneficial in your specific case.