This article provides recent examples of the use of Artificial Intelligence / Machine Learning in hearing aids and explains their close ties to the method of paired comparisons.
Answering the question of who is currently the best tennis player in the world is not easy. There are too many players, and the criteria for the “quality” of a player are unclear. We may even have a subjective preference for, or dislike of, a particular player. And yet such a ranking exists, and it is not disputed. How is this possible?
The ranking can be obtained because the tennis players engage in paired comparisons with unmistakable outcomes. Every match produces a piece of evidence about the quality of both the winner and the loser. And each subsequent match produces even more data, including the history of victories and defeats. This data is fed into a rating algorithm, and a representative ranking for the world of tennis emerges.
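One concrete example of such a rating algorithm is the Elo system, originally developed for chess and often adapted for other head-to-head sports. A minimal sketch (an illustration of the principle, not any official ranking formula):

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_winner, r_loser, k=32.0):
    """Shift both ratings toward the observed outcome of one match."""
    p_win = expected_score(r_winner, r_loser)
    delta = k * (1.0 - p_win)   # large correction when the win was a surprise
    return r_winner + delta, r_loser - delta

# Two players start with equal ratings; A wins three matches in a row.
a, b = 1500.0, 1500.0
for _ in range(3):
    a, b = elo_update(a, b)
print(round(a), round(b))
```

Each comparison moves the ratings only slightly, and upsets move them more than expected results, so the ranking converges toward the players' true strengths as matches accumulate.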
In other words, the algorithm learns from the outcomes of paired comparisons. Machine learning has proliferated enormously on commerce platforms such as Amazon, where algorithms are developed to learn the preferences of individual customers. Knowing a customer’s individual preferences and providing tempting buying suggestions is decisive for the success of such businesses, and research has intensified accordingly.
Freedom of choice
The method of paired comparisons was often used in the fitting of hearing aids. Two settings are compared, and the better one wins. In fact, for a long time that was the only way of fitting hearing aids. Hearing instruments were tested like shoes: one product (size) was compared after another, and the one that fitted best was bought. In this process of trial and error, physical sizes are associated with individual psychological satisfaction. The process itself is very intuitive, but there is also a well-developed theory behind comparative judgments, nicely described in a paper by Amlani and Schafer [Aml 2009].
Having hearing aids of all “sizes” in stock was not always possible, so the concept of the “master hearing aid” was developed [Vor 1988]. The master hearing aid was capable of mimicking all other hearing aids and was helpful in finding the right “size”, so that the actual, physical hearing aid could be ordered.
Technologically, the master hearing aid mutated into a complicated, impractical system that was commercially abandoned and has recently been sidelined to an “exempt” regulatory path by the FDA.
Welcome to the Black box
With the advent of programmable hearing aids, the concept of paired comparisons all but disappeared from the fitting procedure. Fitting was based predominantly on diagnostics and on trust in deterministic fitting formulas (even though several very different formulas exist). Fine tuning required the expertise of professionals, with the GUI of the fitting software resembling the cockpit of a jumbo jet: a giant master hearing aid.
Simply stated: the technology turned its back on the patient and developed according to the requirements of professionals. The professionals were in charge, the patient hoped for the best, and paired comparisons seemed obsolete.
Today, the method of paired comparisons is making a glorious comeback.
The reasons for this resurgence are technological: the proliferation of mobile computing devices and advances in machine learning (artificial intelligence).
Hearing aids with a human face
Mobile devices, such as smartphones, enable ongoing configuration of the signal processing parameters. Connected to a smartphone over a wireless link such as Bluetooth Low Energy, a hearing aid now has access to abundant computing power, diverse acoustical stimuli, and any intelligence located on the smartphone or on a remote server.
And perhaps the most important advantage: hearing aids have acquired an interactive graphical user interface.
With this technological capital at hand, fitting by paired comparisons can now develop far beyond the familiar concept of comparing shoe sizes. The smartphone acts as a smart master hearing aid, speaking with the patient and collecting data about the user’s preferences. Every interaction with the graphical user interface is used as a piece of evidence, stored and processed immediately, gradually developing into the final picture of the optimal signal processing.
The machine learning behind paired comparisons is in most applications based on Bayesian inference. This method starts with a hypothesis about the preference of the user and refines this assumption as more evidence becomes available. Every response of the user increases the confidence in the hypothesis. The inference process refines not only the knowledge about the parameter under scrutiny, such as the magnitude of amplification, but also the process itself. Successful fine tuning of 4 to 8 WDRC parameters can be performed with 20 to 30 paired comparisons, a procedure of only a few minutes.
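The core of this inference loop can be sketched in a few lines. The following is a deliberately simplified illustration, not any manufacturer’s actual implementation: the user’s preferred gain is unknown, a discretized posterior is kept over candidate gains, and each paired comparison multiplies in a likelihood that favors the winning setting (the grid range, noise parameter, and choice model are all assumptions made for the example).

```python
import numpy as np

# Hypothetical example: estimate a user's preferred gain (in dB) from
# paired comparisons. Each answer "A preferred over B" is more likely
# when A is closer to the hidden preference than B.
grid = np.linspace(0, 40, 81)                 # candidate gains, 0.5 dB steps
posterior = np.ones_like(grid) / len(grid)    # flat prior: no assumption yet

def likelihood_winner(winner, loser, pref, noise=4.0):
    """P(winner chosen | true preference): a logistic choice model."""
    advantage = np.abs(loser - pref) - np.abs(winner - pref)
    return 1.0 / (1.0 + np.exp(-advantage / noise))

def update(posterior, winner, loser):
    """One step of Bayes' rule: multiply by the likelihood, renormalize."""
    posterior = posterior * likelihood_winner(winner, loser, grid)
    return posterior / posterior.sum()

# Simulated session: a user who actually prefers about 22 dB
# answers a handful of paired comparisons (winner, loser).
for winner, loser in [(20, 10), (25, 35), (22, 18), (24, 30), (21, 16)]:
    posterior = update(posterior, winner, loser)

estimate = grid[np.argmax(posterior)]
print(f"estimated preferred gain: {estimate:.1f} dB")
```

After only five answers the posterior already concentrates near the simulated preference, which is why a real session with 20 to 30 comparisons can pin down several parameters at once.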
The introduction of patient-facing apps for hearing aids marks an important paradigm change: users are for the first time given ownership of the fitting, which improves self-confidence and satisfaction with the device. By participating in the fitting of their own device, users become active players in their own acoustical world.
Machine learning applications in hearing aids
The manufacturers of hearing aids have adopted many different uses for machine learning. Almost all of these approaches rely on the simple but reliable method of paired comparisons. Here are some of them:
Learning hearing aids
Learning hearing aids have been available for several years now. By observing the responses of the user, for example the position of the volume control or the usage statistics of pre-programmed settings, the hearing aid slowly adjusts itself in the direction of the user’s preference. This approach is very useful during the acclimatization phase.
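The gist of such slow self-adjustment can be sketched as an exponential moving average of the user’s volume corrections. This is a simplification of what commercial devices do; the learning rate and the 3 dB correction are invented values for the example:

```python
def learn_volume_offset(current_offset, user_correction, rate=0.05):
    """Nudge the stored volume offset a small step toward what the user
    just set; over many sessions it converges to the user's preference."""
    return current_offset + rate * (user_correction - current_offset)

offset = 0.0                 # learned offset in dB, relative to prescription
for _ in range(100):         # the user keeps turning the volume up by ~3 dB
    offset = learn_volume_offset(offset, 3.0)
```

Because each step is small, a single atypical adjustment barely moves the learned setting, while a consistent pattern of corrections gradually becomes the new default.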
Environment-aware learning
The same as above, but with the acoustical environment taken into account. The hearing aid can learn to adjust itself as soon as the acoustical conditions change, anticipating the changes the user would otherwise apply through the user interface. Some manufacturers use GPS positioning to predict the user’s intentions. An example is described in [Gra 2015].
Verification of signal processing algorithms
For some acoustical features of hearing aids it is difficult to find an objective measure of quality. A noise reduction algorithm can remove too little or too much of the background rattle. Presenting a variety of stimuli to a large number of users can reveal the statistically relevant optimum. This approach is mostly used in laboratory conditions during the development of hearing aids. An example is described in [Gro 2010].
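A toy version of such an evaluation, with invented data: paired-comparison outcomes from many listeners are pooled, and the candidate settings are ranked by their win rate (real studies, such as [Gro 2010], use far more sophisticated statistical models):

```python
from collections import Counter

# Hypothetical lab data: for each trial, which noise-reduction strength
# won a pairwise comparison, recorded as (winner, loser).
results = [
    ("medium", "weak"), ("medium", "strong"), ("strong", "weak"),
    ("medium", "weak"), ("weak", "strong"), ("medium", "strong"),
]

wins = Counter(winner for winner, _ in results)
appearances = Counter()
for winner, loser in results:
    appearances[winner] += 1
    appearances[loser] += 1

# Rank the settings by win rate across all listeners and trials.
ranking = sorted(wins, key=lambda s: wins[s] / appearances[s], reverse=True)
print(ranking)
```

Even this crude tally shows how a statistically preferred setting emerges once enough comparisons are aggregated.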
(Artificially) intelligent fine-tuning
Fine tuning of hearing aids relies on responses from the user, and the most reliable response is provided by paired comparisons. Active learning schemes such as the one proposed in [Nie 2015] learn quickly and can reduce the number of iterations in the fine-tuning process.
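The core idea of active learning can be illustrated with a toy query-selection rule (this is not the Gaussian-process machinery of [Nie 2015]): instead of presenting pairs in a fixed order, each next pair is chosen where the current estimate is most uncertain, so fewer comparisons are needed.

```python
import numpy as np

def pick_next_pair(grid, posterior, min_spread=4.0):
    """Choose the next two settings to compare: centered on the current
    best estimate, and spaced more widely while uncertainty is high."""
    mean = float(np.sum(grid * posterior))                    # posterior mean
    std = float(np.sqrt(np.sum(grid**2 * posterior) - mean**2))
    half = max(min_spread, std)                               # explore while unsure
    return mean - half, mean + half

grid = np.linspace(0, 40, 81)                  # candidate gains in dB
posterior = np.ones_like(grid) / len(grid)     # flat prior: maximal uncertainty
print(pick_next_pair(grid, posterior))         # early on: a wide, exploratory pair
```

As the posterior narrows after each answer, the proposed pairs move closer together, homing in on the preference with a minimum of questions.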
User-driven fitting and fine-tuning of hearing aids
If the acoustical stimuli for paired comparisons are augmented with an audio-visual interface (as in [Zuk 2010]), the reliability of the paired-comparison outcome increases. The cognitive effort required for paired comparisons is significantly reduced, since the demand on working memory for memorizing the “quality” of acoustical stimuli is distributed between the acoustic and visual modalities.
This approach, where audio-visual data is provided in situ and the subsequent machine learning is based on paired comparisons, is suitable for OTC products, for user-driven fitting of mild to moderate hearing losses, and for the fine tuning of any hearing aid.
Development of new signal processing schemes based on user preference
An innovative approach by van de Laar and de Vries [Laa 2016] takes another step into the future of hearing aids: not only are the fitting parameters obtained by artificial intelligence; now the signal processing itself is to be “invented” by artificial intelligence. In this method, named in-situ personalized hearing aid design, the user states a preference for the sound of the presented algorithm, thereby triggering the next iteration in the design of a custom-made signal processing algorithm.
This list is not complete, and there is no doubt that many new applications of machine learning will appear in hearing aid products in the next few years.
[Aml 2009] Amlani AM, Schafer EC. Application of paired-comparison methods to hearing aids. Trends Amplif. 2009 Dec;13(4):241-59.
[Gra 2015] US patent US9723415B2, Performance based in situ optimization of hearing aids, USPTO 2010.
[Gro 2010] Groot P, Heskes T, Dijkstra TMH, Kates JM. Predicting preference judgments of individual normal and hearing-impaired listeners with Gaussian processes. IEEE Transactions on Audio, Speech, and Language Processing, 2010.
[Laa 2016] van de Laar T, de Vries B. A probabilistic modeling approach to hearing loss compensation. IEEE Transactions on Audio, Speech, and Language Processing, 2016.
[Nie 2015] Nielsen JBB, Nielsen J, Larsen J. Perception-based personalization of hearing aids using Gaussian processes and active learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2015.
[Vor 1988] US patent US4759070A, Patient controlled master hearing aid, USPTO 1988.
[Zuk 2010] US patent US8494196B2, System and method for configuring a hearing device, USPTO 2010.
Illustrations from US-patent applications, downloaded from USPTO.