Hearing Aid Research and Technology
Hearing aids are technological marvels, and their features are changing at an amazing pace. Much of the technology that makes today’s hearing aids so effective wasn’t even available just a few years ago. It’s a tough assignment, but we intend to keep abreast of all the developing hearing aid technology and report on it as it happens!
April 2013 – Listen Up To Smarter, Smaller Hearing Aids
March 2013 – Use your Android phone as a hearing aid remote
February 2013 – Digital Hearing Aids Vs. Analog Hearing Aids
February 2013 – A waterproof hearing aid
December 2012 – MIT Researchers Power Radio Chip from Ear’s “Natural Battery”
November 2012 – Software Improves Quality of Sound for Hearing Aid Users
November 2012 – Siemens Releases New Hearing Aid Technology
October 2012 – New Hearing Aid to Improve Hearing in Noise
April 2012 – New hearing-aid technology
February 2012 – The Effectiveness of Frequency Lowering Hearing Aids
January 2012 – Programming hi HealthInnovations’ Hearing Devices
December 2011 – Panasonic Expands on Hearing Instrument Lineup
December 2011 – Digital Wireless Hearing Aids, Part 4: Interference
November 2011 – Solar Ear CEO Named Social Entrepreneur of the Year
October 2011 – Understanding the Terms “Water Resistant” and “Waterproof”
September 2011 – Phone and TV Solutions for Better Hearing
August 2011 – Hearing aids running on methanol
April 2011 – New Hearing Aid Has Microphone in Ear Canal
March 2011 – Invisible Extended Wear Hearing Aids
March 2011 – Connectivity in 2011: Enhancing the Human Experience
December 2010 – When your hearing aid gets wet
November 2010 – Technology shows promise in reducing telecoil interference
November 2010 – Hearing Aids Must Keep Acoustic Cues Natural
November 2010 – Tuning in to a new hearing mechanism
November 2010 – Frequency Transposition: Training Is Only Half the Story
October 2010 – Some Comments on Hearing Aid Features
September 2010 – Can Spectral Enhancement Improve YOUR Hearing?
September 2010 – Six ways to improve listening to music through hearing aids
September 2010 – Programming hearing instruments to make live music more enjoyable
September 2010 – Enhancing music with virtual sound sources
August 2010 – A New Approach to Nonlinear Signal Processing
April 2010 – HIA and EHIMA Partner on HA Interference Study
March 2010 – Boomers Demanding More Technology in Hearing Aids
October 2009 – Programming hearing aids using speech rather than beeps!
August 2009 – New Hearing Aid Software Improves Speech Recognition
May 2009 – Environmentally Adaptive Hearing Aids
March 2009 – Multicore processor powers hearing aid
February 2009 – Audigence, Audina and UF Team to Improve Hearing Aid Performance
January 2009 – Bluetooth and Hearing Aids: Ready for prime time?
December 2008 – New discovery leading towards intelligent hearing aids
March 2007 – Binaural Processing in Hearing Aids?
August 2006 – The History of Hearing Aid Technology
July 2006 – ASU Pursues High Tech Hearing Loss Solutions
July 2004 – The previous article talked about enabling your hearing aids to be Bluetooth compatible. This article talks about using Bluetooth to build a much cheaper and more powerful hearing aid!
June 2004 – Directional microphones don’t work very well because the physical devices are just too small. It’s a law of physics – or at least it was until scientists studied an amazing little fly that defies physical laws!
July 2002 – Want to know what one of the premier experts on hearing loss and hearing aids thinks about current hearing aid research? Here’s a report on Mark Ross’ workshop at the 2002 SHHH Convention in Seattle.
Listen Up To Smarter, Smaller Hearing Aids
After losing so much of his hearing, Einhorn says, he wasn’t sure whether he would still be able to work as a composer. So he began investing in technology that could help him make the most of the hearing he had left. Einhorn bought a top-of-the-line digital hearing aid for his left ear. He also went on to buy devices that help him hear what he’s composing, talk on the phone, listen to live music, and carry on conversations in noisy restaurants. The solutions aren’t perfect, Einhorn says. But they’re pretty good. “I compose every day. I see my friends. I go to movies. I go to concerts. I do everything,” he says. All that would have been a lot harder if Einhorn had lost his hearing just a couple of decades ago. Back then, most hearing aids were still analog devices with severe limitations. But in the digital age, hearing aids and so-called assistive listening devices have become smaller, smarter and much more powerful. The history of digital hearing aids goes back to the 1980s, says Matthew Bakke, a scientist at Gallaudet University who also directs the government’s Rehabilitation Engineering Research Center on Hearing Enhancement. Bakke recalls helping to build a device that was “the size of a refrigerator.” But like all things digital, it shrank fast. By the 1990s, digital hearing aids had gotten small enough to wear behind the ear. And they’ve at least mitigated many of the problems that plagued earlier devices, Bakke says. Full Story
Use your Android phone as a hearing aid remote
Beltone, a leader in patient-focused hearing technology for over 70 years, has announced SmartRemote™, the first-ever app that allows hearing aid wearers to use their Android phone as a “remote control” to discreetly adjust their hearing instruments. SmartRemote can be downloaded from Google Play at no cost. Beltone developed the SmartRemote app in response to what hearing aid wearers wanted most: a great listening experience in multiple environments, ease of use, and complete discretion. Beltone’s award-winning 2.4 GHz wireless streaming technology underpins the new SmartRemote app. By pairing their hearing aids with the new Direct Phone Link 2 and the SmartRemote app on their Android phone, hearing instrument wearers can use the phone to privately adjust hearing aid volume in one or both ears, change listening programs to match their environment, mute background noise during a phone conversation, and more. Full Story
A waterproof hearing aid
A lot of hearing-impaired people would be more active if they weren’t afraid of damaging hearing aids that don’t like the humidity of gyms or the dousings of personal watercraft. In response, Siemens has introduced what it says is the first waterproof hearing aid, capable of working as deep as 3 feet under water. The Aquaris can also be connected to a Bluetooth remote, called the Minitek, that streams audio to the earpieces, so a person could listen to music from a Bluetooth music player when swimming, for instance. Or an accessory microphone can be worn by someone you need to pay close attention to in a noisy room. A survey by Siemens found that of 500 hearing aid owners, 17 percent restricted their activity to avoid damaging their hearing aids. That is particularly hard on groups like hearing-impaired children and people who work at jobs where there is dust or grime, like farmers or steel workers. Full Story
Cognitive Function, Speech Reception, and Hearing Aid Fitting
Effects of cognitive functions on a person’s speech understanding and hearing aid fitting have received a lot of attention in recent years. Akeroyd examined 20 studies on the association between cognitive functions and speech reception/recognition in background noise (Int J Aud 2008;47[Suppl 2]:S53), and the results, when combined with those of Humes and colleagues, indicate that working memory and the degree of hearing loss were the most effective predictors for an individual’s speech reception and recognition in noise. (J Speech Hear Res 1994;37:465; J Acoust Soc Am 2002;112[3 Pt 1]:1122; J Speech Lang Hear Res 2007;50:283.)
A person’s working memory capacity generally can be measured in three ways: Full Story
The Effects of Frequency-Lowering on Speech Understanding in Children
Clinicians are often faced with decisions about engaging frequency-lowering signal processing in their patients’ hearing aids. Frequency-lowering is designed to move high-frequency sounds to lower frequencies, where limited hearing and hearing aid bandwidth are less likely to reduce audibility. Increasing access to high-frequency sounds is particularly important for hearing-impaired children, who may need high-frequency speech sound audibility to maximize speech understanding. (Arch Otolaryngol Head Neck Surg 2004;130:556.) The clinical decision-making process for frequency-lowering can be complicated. Four different approaches are currently available, and each manufacturer has a different philosophy about how to lower high-frequency sounds and whether lowering should be engaged at all times or only when selected by the patient. Widely varied outcomes across research studies, even with the same type of signal processing, add to the confusion. Examining nonlinear frequency compression (NFC), a commonly used and well-researched type of frequency-lowering, can help clinicians understand the factors that influence outcomes with this technology. Full Story
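The frequency mapping behind NFC can be sketched in a few lines. Below is a minimal Python illustration, assuming a simple log-domain compression rule above a cutoff frequency; the function name, cutoff, and ratio are illustrative choices, not any manufacturer’s defaults.

```python
import numpy as np

def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Nonlinear frequency compression sketch: frequencies below the
    cutoff pass through unchanged; above it, the log-scale distance
    from the cutoff is divided by the compression ratio."""
    f_in = np.asarray(f_in, dtype=float)
    compressed = cutoff * (f_in / cutoff) ** (1.0 / ratio)
    return np.where(f_in <= cutoff, f_in, compressed)

# Inputs at and above the cutoff are squeezed into a narrower band
for f in (500, 2000, 4000, 8000):
    print(f"{f:5d} Hz -> {float(nfc_map(f)):7.1f} Hz")
```

Note how an 8 kHz component lands at 4 kHz with a 2:1 ratio, which is why the same processing that restores audibility can also distort sound quality if the cutoff is set too low.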
Hearing-aid hackers fine-tuning their own devices
If you are short-sighted, usually all it takes is a visit to an optician to get a pair of spectacles to help restore the world to sharp detail. But if you suffer hearing problems, visiting an audiologist just the once will probably not restore sounds to crisp clarity. The consequent frustration is driving some people with the appropriate expertise to hack into their own hearing aids to carry out DIY improvements. Brian Moore, professor of audiology at the University of Cambridge, explained: “It’s not the same as spectacles where you know you have the right prescription. With a hearing aid you can have an initial prescription but you will need to do some fine-tuning around that afterwards to satisfy the individual person.” He said the tuning process was frustrating because of the difficulty in making hearing aids work within different levels of noise. Full Story
New hearing-aid technology
Digital advances have made today’s hearing aids smaller, smarter and easier to use. And microchips, laser beams and even insects may help create a more crystal-clear experience in the future. Research at Arizona State University, Cornell University and other institutions is assisting in the push to improve hearing and reduce the cost just as millions of baby boomers and Gen Xers are expected to boost demand for the devices. The technology already has evolved dramatically from the ear trumpets of the 1860s. Engineers and medical professionals made significant, recent improvements in the quality of hearing aids and said they expect to see additional breakthroughs within the next year. One recent advance is the ability to identify and amplify desired sounds such as a human voice while muting background noise, said Jerry Ruzicka, president of Starkey Laboratories Inc., which makes hearing aids. Full Story
The Effectiveness of Frequency Lowering Hearing Aids
Not long ago, the author had a conversation with a sales rep from one of the major hearing aid manufacturers. The author remarked that frequency-lowering technology (FLT) was looking like the “next big thing” in hearing aids, and asked whether the rep’s company had any plans to incorporate FLT in its hearing aids. The rep’s response was, “Well…you know…the literature says it doesn’t work.” At first glance, the literature does support that conclusion; examined more closely, however, the statement becomes questionable. . . . Patients with precipitous high-frequency hearing loss have presented challenges to hearing care professionals since hearing aids were first dispensed. Precipitous high-frequency hearing loss can have serious effects on speech understanding. High-frequency speech components, particularly voiceless consonants, may be inaudible. In these instances, attempts to make these sounds audible are usually fruitless due to the dearth of sensory cells in the region of the cochlea responsible for coding these sounds, or even the presence of cochlear dead regions. Conventional amplification schemes can be of limited or no utility with this kind of hearing loss. Full Story
Digital Wireless Hearing Aids, Part 4: Interference
With the wide array of high-tech gadgets in use today, it is inevitable that interference will pose challenges for hearing aids. Very often it can be difficult to determine the exact source of interference, but it is helpful if the patient can describe the environment where interference is noted. This article explains the various forms and routes of interference, and provides practical advice about mitigating interference problems in digital wireless hearing aids. Full Story
Understanding the Terms “Water Resistant” and “Waterproof”
Moisture and electronics do not make good companions. Damage from moisture is one of the leading reasons for hearing aid repair. Common problems are electrical shorts, condensation, and corrosion. Additionally, moisture can clog the air holes of the zinc-air battery. This unwanted moisture can come from weather and humidity, the perspiration of the user, or accidental water incidents. Problems associated with moisture in hearing aids can be very frustrating for the patient, as often the hearing aid “dies” unexpectedly, with no quick remedy available. Over the years, there have been many attempts to solve hearing aid moisture problems. Devices such as protective wrappers or sleeves, dehumidifying kits, and special hearing aid dryers have been introduced. More recently, special nanocoatings have been used that make the hearing aids water-resistant – a significant improvement over hearing aids of previous generations. But is water-resistant good enough? Full Story
ReSound Releases Dual-Microphone Wind Noise Reduction Technology
ReSound has announced that it has developed wind noise reduction technology called WindGuard. The dual-microphone signal processing technology will be available in September 2011 with the release of ReSound’s upgraded Aventa fitting software. ReSound’s Surround Sound Processor is one feature that is already designed to reduce wind noise. Since wind noise is predominantly a low-frequency sound, it is typically a greater problem for directional hearing aids. The Surround Sound Processor incorporates low-frequency sound inputs that are processed omnidirectionally. However, wind noise still remains an issue for some users. WindGuard acts as a second line of defense against wind noise in both directional and omnidirectional microphone modes. Full Story
New Hearing Aid Has Microphone in Ear Canal
It has long been known that, when the microphone is moved from the top of the ear (as in BTEs) to somewhere inside the auricle (as in ITEs), a high-frequency boost occurs, as the auricle acts as a natural acoustic preamplifier, with potential benefits in directivity and localization. Unlike a traditional RIC, a new microphone-and-receiver-in-the-canal (MaRiC) design by ExSilent incorporates a small canal-worn module that contains both the microphone and the receiver, together with an over-the-ear processing unit, to take maximum advantage of the high-frequency focusing ability of the auricle as well as the other attractive features provided by RIC devices. Full Story
ReSound iSolate™ Nanotech Reduces Moisture-Related Repairs By 50%
ReSound, the technology leader in hearing solutions, has released results from a recent study of the iSolate™ nanotech protective coating for hearing instruments. In a review of 50,000 hearing aids sold, the iSolate™ nanotech protective coating was shown to decrease moisture- and debris-related repairs by 50% in the first six months. “The benefits of iSolate™ nanotech become more evident with time,” said Jennifer Groth, Global Audiology, ReSound. “We expect even better results at the 9- to 12-month mark.” Full Story
Speech-in-Noise Potential of Hearing Aids with Extended Bandwidth
This study shows that a hearing aid with an extended bandwidth may improve the wearer’s tolerance for noise in a noisy environment. However, to achieve this improvement, the prescriptive gain target needs to accommodate the added bandwidth of the hearing aid. Full Story
Connectivity in 2011: Enhancing the Human Experience
by Douglas L. Beck, AuD, and Marcus Holmberg, PhD
Three significant benefits previously available only through FM systems – reduced background noise, reduced reverberation, and a high SNR – are attained by a new wireless remote microphone from Oticon. The ConnectLine Microphone transmits wireless signals from virtually any sound source (within about 40 feet) directly into the Streamer, which then sends the audio signal to two wireless-enabled Oticon hearing aids. Full Story
New algorithm automatically adjusts directional system for special situations
Directional-microphone technology has been used in hearing instruments since the late 1960s and has been shown to improve speech understanding in background noise (e.g., see the evidence-based review by Bentler). For many years, this technology was considered a “special feature” and was available only in select models. All this has changed in the last 15-20 years, and today manufacturers offer directional technology in most of their hearing instruments. In modern instruments, the directional effect is usually accomplished using two omnidirectional microphones, an approach Siemens introduced with its dual-directional microphones (“TwinMic”) in 1997. Research with this new technology produced encouraging findings. In 2002, Siemens was the first to add automatic-adaptive functionality to the polar patterns of directional microphones. It was “automatic” in that, based on the analysis of the situation-detection system, the algorithm switched from omnidirectional to directional and back again. It was “adaptive” in that the directivity was focused to the front, but the null of the polar pattern could be steered to correspond with the loudest sound from the rear hemisphere, allowing maximum attenuation of background noise in that general region. Or, if a diffuse noise field was detected, the adaptive algorithm would select the polar pattern that provided the best overall directivity. Full Story
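The adaptive null steering described above can be illustrated with a toy first-order differential pair. The NumPy sketch below assumes idealized plane-wave tones and a mic spacing chosen so the inter-mic delay is exactly one sample: it forms back-to-back cardioids and picks the mixing weight that minimizes output power, which places the null on a rear-hemisphere interferer. This is a textbook-style toy model, not Siemens’ actual algorithm.

```python
import numpy as np

fs = 16_000                 # sample rate (Hz)
c = 343.0                   # speed of sound (m/s)
d = c / fs                  # spacing so the inter-mic delay is 1 sample
N = fs                      # 1 second of signal
t = np.arange(N) / fs

def mic_pair(freq, theta_deg):
    """Plane-wave tone arriving from theta (0 deg = front endfire).
    Mic 2 receives the wave (d/c)*cos(theta) later than mic 1."""
    tau = (d / c) * np.cos(np.radians(theta_deg))
    return np.sin(2*np.pi*freq*t), np.sin(2*np.pi*freq*(t - tau))

# Front target at 1 kHz; interfering noise from 120 degrees at 500 Hz
s1, s2 = mic_pair(1000.0, 0.0)
v1, v2 = mic_pair(500.0, 120.0)
m1, m2 = s1 + v1, s2 + v2

# Back-to-back first-order cardioids (delay-and-subtract, 1 sample)
f = m1[1:] - m2[:-1]        # front-facing cardioid: null toward the rear
b = m2[1:] - m1[:-1]        # back-facing cardioid: null toward the front

# "Adaptive" step: choose beta to minimize total output power,
# which steers the rear null onto the interferer
beta = np.dot(f, b) / np.dot(b, b)
y = f - beta * b

# Sanity check: run the noise alone through the frozen beamformer
fv, bv = v1[1:] - v2[:-1], v2[1:] - v1[:-1]
yv = fv - beta * bv
print(f"noise power: fixed cardioid {np.mean(fv**2):.2e}  adaptive {np.mean(yv**2):.2e}")
```

Because the back cardioid is target-free, subtracting a scaled copy of it cancels the rear noise without touching the front target, which is the essence of the automatic-adaptive scheme.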
Comparison of Wireless and Acoustic Hearing Aid-Based Telephone Listening Strategies
Objective: The purpose of this study was to examine speech recognition through hearing aids for seven telephone listening conditions.
Design: Speech recognition scores were measured for 20 participants in six wireless routing transmission conditions and one acoustic telephone condition. In the wireless conditions, the speech signal was delivered to both ears simultaneously (bilateral speech) or to one ear (unilateral speech). The effect of changing the noise level in the nontest ear during unilateral conditions was also examined. Participants were fitted with hearing aids using both nonoccluding and occluding dome ear tips. Participants were seated in a room with background noise present and speech was transmitted to the participants without additional noise.
Results: There was no effect of changing the noise level in the nontest ear and no difference between unilateral wireless routing and acoustic telephone listening. For wireless transmission, bilateral presentation resulted in significantly better speech recognition than unilateral presentation. Bilateral wireless conditions allowed for significantly better recognition than the acoustic telephone condition for participants fitted with occluding ear tips only.
Conclusion: Routing the signal to both hearing aids resulted in significantly better speech recognition than unilateral signal routing. Wireless signal routing was shown to be beneficial compared with acoustic telephone listening and in some conditions resulted in the best performance of all of the listening conditions evaluated. However, this advantage was only evident when the signal was routed to both ears and when hearing aid wearers were fitted with occluding domes. Therefore, it is expected that the benefits of this new wireless streaming technology over existing telephone coupling methods will be most evident clinically in hearing aid wearers who require more limited venting than is typically used in open canal fittings. Source and Order Report
Hearing Aids Must Keep Acoustic Cues Natural
The human auditory system, during the course of evolution, has become attuned to the multi-dimensional cues of speech, as well as sounds from the broader acoustic environment. To keep auditory perception as intact as possible, for as many people as possible, and for as long as possible, we optimize hearing aid signal processing to ensure audibility while maximizing these naturally occurring cues. In this context, keeping acoustic cues natural implies, among other things, the reproduction of sound with a high bandwidth, maintaining the information conveyed by onsets of words, syllables, and environmental sounds, and the detailed amplitude fluctuations that constitute sounds. Other examples relate to binaural cues, such as interaural time and level differences, head shadow, or better-ear effects. These are used when locating sound sources and segregating one source from another. It has been demonstrated that hearing aid signal processing, including some forms of wide dynamic range compression (WDRC), can greatly affect interaural level differences and better-ear effects. Therefore, hearing aid signal processing should be designed with this in mind. Full Story
How to Compare Feedback Suppression Algorithms in Open-Canal Fittings
by Jason A. Galster, PhD, and Elizabeth A. Galster, AuD
Professionals are fitting open-canal behind-the-ear (BTE) hearing aids to more patients than ever before. This is reflected in the growth of BTE hearing aid sales, which now account for 60% of all hearing aids sold in the United States. Patients, in turn, experience reduced occlusion and improved sound quality and comfort as a result of these open-canal fittings.
Advancements in feedback suppression algorithms have made many of these beneficial features possible. Despite the advancements, professionals have undoubtedly noticed considerable variability in the performance of feedback suppression systems across products and patients. The observed variability in the performance of a feedback suppression algorithm is expected due to several factors, including differences in manufacturers’ feedback suppression algorithms, patients’ pinna and ear canal geometries, venting effects, and prescribed gains.
In open-canal fittings, the ear canal acts as a large vent, increasing acoustic leakage and making feedback harder to manage. In occluded fittings, feedback is typically restricted to a range of high frequencies, most often between 3,000 Hz and 5,000 Hz. Compare this to the open-canal configuration, where the energy of the acoustic leakage flattens and some peaks shift downward in frequency. Managing increased acoustic leakage across a wider range of frequencies makes feedback suppression in open-canal fittings more complex and creates a challenging condition for feedback suppression algorithms.
Frequency Transposition: Training Is Only Half the Story
by Francis Kuk, PhD, and Denise Keenan, MA
It has been almost 5 years since Widex reintroduced frequency transposition as an approach to regain audibility of the high frequencies that are either unaidable or unreachable. Since the introduction of the Audibility Extender (AE), we have conducted several studies using adults and children as subjects to demonstrate its efficacy. In general, we have demonstrated that the use of AE with optimally selected settings, when paired with proper training and use of the device, yielded positive changes in the wearer’s identification of speech sounds, especially of voiceless and fricative sounds. Such benefits were seen in both quiet and noise conditions.
Results of our reported studies showed that significant improvements in consonant identification scores occurred after the subjects had worn the AE for 1-2 months. Speech identification scores with the AE at the initial fit, although improved, were not statistically different from the scores measured with the non-AE program. Full Story
Peak clipping revisited: Turning distortion to listener advantage
It is commonly assumed that audio processing should highly value the concept of fidelity. Fidelity, of course, refers to the premise that output should be “true to the input,” and the “hi-fi” industry has engaged in marketing wars over gradations of fidelity. Included in the standard specifications that proclaim “high fidelity” credentials are wide, smooth bandwidths and the lowest possible measures of nonlinear distortion. Although the fidelity principle is routinely violated in hearing aid fittings by purposeful alterations to the frequency response pattern and to the dynamic relations of soft and loud sounds, there persists a widespread avoidance of waveform peak clipping and the harmonic distortion that may result. Curiously, though, it has been understood for over 60 years that even severe peak clipping does very little to disrupt speech understanding. The extensive investigations by J. C. R. Licklider and others at the Harvard Psycho-Acoustics Laboratory in the late 1940s showed quite clearly that even drastic (“infinite”) clipping caused negligible reductions in word recognition. Full Story
Six ways to improve listening to music through hearing aids
Because all speech must emanate from a vocal tract that is between 15 cm (child) and 19 cm (large adult) in length, it is no surprise that the long-term speech spectra are similar for a wide range of languages. All speech is generated by a soft-walled, moist set of tubes (the oral and nasal cavities) and, although we have articulators (tongue, soft palate, lips) that can move, there are limits to what we can generate. Byrne et al. studied the long-term speech spectra of a number of languages and (expectedly) found almost identical spectra. The only consistent difference they found was that males have more low-frequency emphasis than females, which is directly related to the lower fundamental frequencies of the male subjects. This consistency in the human vocal tract has allowed us to use aspects of the long-term speech spectrum in hearing aid fittings. Music, however, is quite different. Some forms have long-term spectra that are similar to the long-term speech spectrum, and others bear little resemblance to it. Music can have significant low-frequency energy or none at all. It can have low- or high-frequency spectral emphasis. It can be very intense, and it can be very quiet. In short, the dynamic ranges and bandwidths of musical instruments can be, and typically are, much different from and greater than those of speech. Full Story
Programming hearing instruments to make live music more enjoyable
While concentrating our clinical efforts on the perception of speech in many different environments, hearing healthcare providers may sometimes overlook other signals, such as music, that may be very meaningful to the patient. Because hearing instruments are designed to focus on speech, music lovers and musicians are often disappointed by the sound quality of music. Settings and electroacoustic characteristics of hearing instruments may be ideal for speech signals, but not for music. As a result, hearing instruments may react inappropriately when music is present, since there are many acoustic differences between speech and music. A hearing aid that has been optimized to handle music as an input should have both software and hardware differences from other instruments. Bernafon has developed Live Music Plus, a software program with a dedicated combination of features for live music processing, which is available in its Veras and Vérité 9 hearing instrument families. In this paper we will first review some of the differences between music and speech signals. We will then explore the four elements that make up Live Music Plus, and finally we will report on the reactions of some professional musicians who have tried hearing aids with this program. Full Story
Enhancing music with virtual sound sources
For many people, listening to music is an important part of life. Most often the music is recorded and played on a CD player, the radio, the television, an mp3 player, or a computer. Listening to music from such devices was long out of reach for hearing aid users. But recently, the development of devices, such as the Oticon Streamer, that can send music wirelessly to hearing aids enables people to enjoy listening to music directly in their hearing aids with a good signal-to-noise ratio. However, listening to music sent directly to hearing aids is not optimal. Specifically, the sound image appears to be inside the listener’s head. This is referred to as “in-the-head locatedness.” When the signal is the same at both ears (monophonic), the listener perceives it as being in the middle of his or her head. When the signal is stereophonic, the sound is perceived as being on a line between the ears. By changing the level of the signal in either ear, the sound can be moved between the ears. This is referred to as “lateralization of the sound image.” Full Story
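The level-based lateralization described here can be sketched in a few lines. The Python example below (the function name, symmetric gain split, and 10 dB figure are illustrative assumptions) moves a mono signal toward one ear purely by applying an interaural level difference:

```python
import numpy as np

def lateralize(mono, ild_db):
    """Shift a monophonic sound image toward one ear using only an
    interaural level difference; positive ild_db moves it rightward.
    The difference is split symmetrically across the two ears."""
    g = 10 ** (ild_db / 40.0)          # half the ILD (in dB) per ear
    return mono / g, mono * g          # (left, right)

t = np.arange(8000) / 8000.0
tone = 0.1 * np.sin(2 * np.pi * 440 * t)

left, right = lateralize(tone, 10.0)   # image shifted toward the right ear
ild = 20 * np.log10(np.max(np.abs(right)) / np.max(np.abs(left)))
print(f"resulting ILD: {ild:.1f} dB")
```

A renderer that also applied interaural time differences and ear-specific filtering could move the image out of the head entirely, which is the “virtual sound source” idea the article develops.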
A New Approach to Nonlinear Signal Processing
For the past two decades, amplification for patients with sensorineural hearing loss (SNHL) has been driven by the concept of wide dynamic range compression (WDRC). It has been known for many years that a core characteristic of SNHL is a reduced dynamic range. The amount of “working space” within the auditory system (the range between threshold and the uncomfortable loudness level, or UCL) is typically smaller than the full range of speech signals that a person is likely to encounter throughout the course of the day. The WDRC approach was developed to take a full range of speech inputs – the softest parts of soft speech through the loudest parts of loud speech – and place them within the remaining dynamic range of the patient. Over the years, a variety of schemes have been developed to calculate the appropriate gain required for different input levels in order to achieve the goal of full audibility. Most of the attention in this effort has been paid to questions such as how audibility can be maximized without having sound levels violate the patient’s loudness tolerance, what minimal amount of audibility is required for understanding an ongoing signal, and which frequency regions should be prioritized. One aspect that has received less attention is the timing parameters of a compression system. The basic concept of compression is that the gain applied to the signal is inversely proportional to the input level: when the input level goes up, the gain decreases; when the input level drops again, the gain goes back up. However, the response of compression systems is typically not instantaneous. Typical input signals, especially speech, are not of a uniform level. Therefore, “waiting periods” – commonly known as attack and release times – are often built into the response patterns of nonlinear circuitry. Full Story
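A bare-bones version of such a compressor, with a static WDRC gain rule plus an attack/release envelope follower, might look like the Python sketch below. The threshold, ratio, and time constants are arbitrary illustrative values, not a prescriptive fitting formula.

```python
import numpy as np

def wdrc_gain_db(level_db, threshold_db=45.0, ratio=2.0):
    """Static WDRC rule: no gain change below threshold; above it,
    gain is reduced so output grows 1 dB per `ratio` dB of input."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)      # dB of gain reduction

def smooth_level(x, fs, attack_ms=5.0, release_ms=50.0):
    """Envelope follower: fast attack, slow release -- the 'waiting
    periods' that keep the gain from jumping sample to sample."""
    a = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    r = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, e = np.zeros_like(x), 0.0
    for n, v in enumerate(np.abs(x)):
        coef = a if v > e else r            # rising: attack; falling: release
        e = coef * e + (1.0 - coef) * v
        env[n] = e
    return env

fs = 16_000
t = np.arange(fs // 2) / fs
# a soft tone for 0.25 s followed by a loud tone
x = np.where(t < 0.25, 0.05, 0.5) * np.sin(2 * np.pi * 500 * t)

env = smooth_level(x, fs)
level_db = 20 * np.log10(np.maximum(env, 1e-6)) + 100  # arbitrary dB offset
gain = 10 ** (wdrc_gain_db(level_db) / 20)
y = gain * x                                           # compressed output
```

Note that the gain applied to the loud segment is smaller than the gain applied to the soft segment, and that the transition between the two is governed entirely by the attack and release constants, which is the timing behavior the article argues deserves more attention.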
Designing hearing aid signal processing to reduce demand on working memory
Imagine two scenarios. In the first, you’re running a little late while driving in an unfamiliar city (without satellite navigation) on your way to an important meeting. In addition to looking for street signs, you are struggling to read a map to find your way, and the heavy traffic only adds to the distraction. You accidentally miss your exit and must work out a new route to your destination. You are frustrated, and it takes a lot of mental effort to complete the task. By the time you arrive, you’re exhausted.
Now, imagine a second scenario. You’re driving to work along the same familiar route you take daily. Traffic is flowing smoothly, and the trip is routine. While driving, you think about your weekend plans. Suddenly, you realize you’ve arrived at work. You’ve driven through the whole town without actually noticing how you were driving, and you arrive precisely on time while expending little mental effort.
Obviously, a drive through a city can vary significantly with regard to the amount of problem solving, precision, focus, conscious processing of new information, and memorization required, the amount of mental effort expended, and the amount of stress experienced. The first scenario represents a process that involves significant effort, problem solving, and mental resources. The second scenario involves over-learned driving patterns that made the drive automatic and effortless and required few mental resources.
The above examples are analogous to different listening situations. Some listening situations appear effortless, while others demand much greater effort to understand what is being said. We know hearing-impaired people expend more listening effort in demanding listening situations. Full Story
Evaluation of frequency compression and high-frequency directionality
Although the concept of frequency lowering has been around for at least four decades, it has recently seen a resurgence as a “hot topic” in amplification. In the past 2 years, it has been implemented in products from major hearing aid manufacturers, including Phonak. The goal of frequency lowering is to shift high-frequency sounds that cannot be adequately amplified by a hearing aid or used by the corresponding region of the cochlea to lower frequencies where the information can be better amplified or used. In particular, the feature is expected to assist in making available such important information as high-frequency speech sounds (e.g., /s/, /f/, /?/) and frequencies between 2000 and 5000 Hz that are uniquely shaped by the pinna, depending on their angle of origin, to assist with front-back (F-B) discrimination. Full Story
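The frequency-compression variant of frequency lowering can be illustrated with a simple mapping: below a knee point, frequencies pass through unchanged; above it, they are squeezed into a narrower band so that high-frequency cues land where they can still be amplified. The knee point and ratio below are made-up illustrative values, not the settings of Phonak's implementation:

```python
def compress_frequency(f_hz, knee_hz=2000.0, ratio=2.0):
    """Nonlinear frequency compression: frequencies at or below the knee
    point are unchanged; frequencies above it are divided down toward the
    knee, shrinking the high-frequency band by the compression ratio."""
    if f_hz <= knee_hz:
        return f_hz
    return knee_hz + (f_hz - knee_hz) / ratio
```

For example, with a 2000 Hz knee and a 2:1 ratio, a 6000 Hz component is relocated to 4000 Hz, while speech energy below the knee is untouched.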
Solving the trade-off between speech understanding and listening comfort
When a technique has been around for some time, it is usually assumed to be mature. This might not be true, however, in the case of wide dynamic range compression (WDRC). Compression is widely seen as the appropriate compensation for loudness recruitment, but there the agreement ends. In fact, Moore writes in an article on compression that the “controversy continues about…whether it should be fast acting or slow acting.”1 Likewise, Bor et al. say of multichannel compression that “the appropriate number of channels remains an unanswered question.” Such uncertainties suggest room for improvement. Indeed, improvements are necessary if hearing instruments are to increase user satisfaction. And improvements are also possible, as this article will show. Full Story
HIA and EHIMA Partner on HA Interference Study
EHIMA and HIA have completed a study on future interference risks: “Electromagnetic compatibility and Radio spectrum Matters (ERM); Hearing Instrument RF Interference Analysis.” The study was the work of two experts in the RF field: Brian Copsey, who is from Europe and heads the ETSI working group, and Stephen Berger of the USA, an expert on RF and other FCC matters. Our industry has committed considerable resources over the past decade to protecting our products from mobile phone interference. This report is the start of a continued effort to raise awareness of new interference risks to our products, and it will be updated on an ongoing basis. With this surveillance activity, our industry can be better equipped to handle the next interference issue that arises. Full Story
Interpreting the efficacy of frequency-lowering algorithms
Despite a long history of research and commercial efforts,1 hearing aids with frequency-lowering algorithms have become popular only recently. Their earlier lack of commercial success may be attributed in part to the immaturity of the analog technology available when these devices were first introduced, which produced plentiful artifacts. But insufficient training for the wearers of such devices, unrealistic expectations, and inadequate means of evaluating their efficacy were equally important contributors to the limited acceptance of this technology. Widex re-introduced the concept of linear frequency transposition in its Inteo hearing aid in 2006 under the name Audibility Extender.2 Since then, we have explored various avenues to better understand how such a feature can be fitted3,4 and its use facilitated.5 Just as important, we have also studied (and developed) research tools that may be optimal for evaluating such an algorithm. Our effort led us to report on the efficacy of such an algorithm in a simulated hearing loss,6 in an open-tube fitting,7 in children,8 and in adults in quiet and in noise.9 We have learned that demonstrating the efficacy of a frequency-lowering algorithm is not a straightforward matter. We would like to share our experience in this paper. Full Story
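Linear frequency transposition, the approach named here, differs from frequency compression: a high-frequency band is shifted down by a fixed offset, preserving the spacing between its components rather than squeezing them together. The sketch below is a conceptual illustration with invented parameter values; it is not the Audibility Extender algorithm:

```python
def transpose_frequency(f_hz, start_hz=4000.0, shift_hz=2000.0):
    """Linear frequency transposition: components at or above the start
    frequency are moved down by a fixed shift, keeping their relative
    spacing intact; components below the start band are left alone."""
    if f_hz < start_hz:
        return f_hz
    return f_hz - shift_hz
```

Under these toy settings, components at 5000 and 6000 Hz land at 3000 and 4000 Hz (still 1000 Hz apart), whereas a 2:1 frequency compressor would halve that spacing, one reason the two schemes can sound quite different and require different evaluation methods.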
Boomers Demanding More Technology in Hearing Aids
Six months ago, hearing-aid salesman Doug Gibson decided to start pitching a new product to his customers: high-tech hearing aids that connect wirelessly via Bluetooth technology to cell phones, iPods and televisions. He wondered whether anyone would buy them. Many of his customers are in their 70s or older, and some do not use cell phones, let alone hands-free sets or MP3 players. Gibson found what other retailers are beginning to see as a trend. Baby boomers just beginning to need hearing aids are gravitating toward ones equipped to handle their gadgets, or disguise the hearing aid as one of them. “They’re pretty techie people, and they all have Bluetooth in their cars. Most are in their 50s to early 70s,” Gibson says. “Soon I think we’re going to be seeing a lot more.” Aging boomers, because of their large numbers and willingness to pay for style and comfort, are a target market for manufacturers. Increasingly, that goes for medical devices, too. Full Story