An essential task for the central auditory pathways is to parse the auditory messages sent by the two cochleae into auditory objects, the segregation and localisation of which constitute an important means of separating target signals from noise and competing sources. When hearing loss is markedly asymmetric, patients are reduced to a monaural exploitation of sound messages, and their performance falls significantly short of what it would be in a binaural situation. Rehabilitation procedures must aim at restoring as many binaural advantages as possible. These advantages encompass binaural redundancy, the head shadow effect and binaural release from masking, the principles and requirements of which make up the topic of this short review. Even without a complete understanding of their neuronal mechanisms, empirical data show that binaural advantages can be restored even in situations in which faultless symmetry is inaccessible.

Similar to stereoscopy, stereophony is based on combining information in the brain from the two ears, creating a robust illusion that confers on the stimulus a special character of perspective known as three-dimensional (3D) depth and localisation. Both in the visual and auditory modalities, this character contributes to creating ‘objects', which are easier to segregate and identify than they would be if only a single receiver were available. For example, sound coming from a source on the right side of a subject reaches the left ear later than the right ear, as it has to travel further, and with a lower intensity, as it has experienced the head shadow effect. The further to the right the source is located, the larger the interaural differences. If the two ears process sound in such a way that interaural differences are accurately encoded in the volleys of auditory-nerve action potentials in response to the sound, two complementary tasks are left to the brain: to detect that particular patterns of action potentials from the right and left side correlate so well that they can be ascribed to a single, definite object, and, from the asymmetries between the two correlated inputs, to localise this object in some horizontal direction. These tasks are strongly frequency dependent, because the head shadow effect hardly affects low frequencies, which are diffracted around the head; conversely, timing differences are less efficiently encoded at high frequencies because auditory neurons cannot encode the fine structure of high-frequency stimuli, but only their envelopes. The duplex theory of Lord Rayleigh [1907] was brought forth to account for the need for the brain to switch from timing to intensity cues, i.e. from interaural time differences (ITD) to interaural level differences (ILD), to locate sound sources as frequency increases.

Thus, when submitted to this analysis, competing activities will be identified as separate objects located in different directions to which it will be much easier to pay (or not pay) attention than it would have been with a monaural input providing a flat, 2D sound landscape. The evolutionary advantage of this ability to detect a predator stealthily approaching its prey while the latter drinks near a noisy waterfall is so huge that the brain has developed and maintained impressively refined stereophonic systems. Despite their likely high genetic cost, and although they can be based on quite different blueprints (e.g. birds vs. mammals [Grothe et al., 2010]), they are ubiquitous in all binaurally equipped species. For humans, survival may not be at stake, but in the presence of background noise, as happens in a place where several talkers interact, understanding speech requires the ability to locate and follow each talker. It is a notorious complaint of hearing-impaired people with asymmetric hearing loss that they are unable to do so, even with normal hearing in one ear and, as a result, they experience a considerable social - and often also professional - disability.

Audition, including its stereophonic abilities, is a complex process in which peripheral analysis in the cochlea is only a first stage. One important function in the brain, combining and comparing the raw information from the two cochleae, occurs in different brainstem nuclei, notably in the olivary complex, which exploit the intensity, timing and frequency aspects of what the cochleae have encoded. From their (hopefully consistent) outputs, a 3D landscape is built. In the case of asymmetric hearing loss, some aspects are so degraded in the poorer ear relative to the better one that comparison between the two ears may become impossible. It must be considered that patients with asymmetric hearing loss treated with a cochlear implant (CI) might perform as though they were monaural since, to make sense of widely asymmetric information from two differently impaired and differently rehabilitated receivers, the brain may face an impossible computational challenge. A compounding factor, now well documented, is that before rehabilitation, impairments that create disparity between the two ears, even if slight and of short duration, such as episodes of unilateral conductive hearing loss, trigger plastic changes in neuronal wiring and in brain processing, and these changes are more intense during certain critical periods. The purported amblyaudia [Keating and King, 2013], named by analogy with amblyopia in the visual world, is the most extreme consequence, and the reversibility of this plasticity is an important matter of concern. There would be no sense in trying to rehabilitate the poorer ear of an amblyaudic patient by just sending crude and distorted cues into non-existing circuits.

Yet, over the last few decades, experimental evidence has provided a strong incentive for bilateral fitting of hearing aids and/or CI even in difficult cases, if only to make sure that the patient would not become deaf in case of CI failure on the unilaterally implanted side, and that the better ear (which may not be predictable) has been implanted. Furthermore, experimental evidence of sometimes modest, but likely genuine, binaural benefits has been brought forward. This is a remarkable development, as only 15 years ago it was feared that patients using both a CI on one side and a conventional hearing aid on the other might be at a disadvantage; in those days, patients were not always encouraged to keep using their contralateral hearing aid. This suggests that deprivation plasticity may be reversible after all, and that the physiology of sound source analysis and localisation may be less demanding than initially believed. As happens in the visual modality, the brain makes use of many different cues to determine the 3D characteristics of an auditory landscape. Their complete combination is required for full stereophony to be achieved, but access to only some bilateral cues may still generate substantial benefits. It is, thus, important to review which cues may still be exploited despite imbalanced inputs from the two ears to the brainstem nuclei in charge of binaural processing, what requirements they need in order to function, and how to clinically check for their existence. This is the goal of this short review article.

A first, basic binaural benefit stems from the fact that each of the two ears substantially contributes to the action potentials that reach the brainstem, which is referred to as binaural loudness summation (a particular case of the more general notion of binaural redundancy). Very roughly speaking, the feeling of loudness generated by a sound relates to the number of action potentials triggered by the sound and integrated in (as yet unknown) brain centres. In a normal-hearing subject, this number doubles when the two ears are used instead of one for a sound coming from the front of the listener. For sounds at least 30 dB above their detection threshold, loudness (expressed in sones) undergoes a twofold increase. To obtain the same increase in loudness with a single ear would require the sound level to be increased by about 10 dB [Fletcher and Munson, 1933]. This increment decreases to 3 dB for weak sounds.
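As a back-of-envelope illustration of these figures, the classical rule of thumb that loudness in sones doubles for every 10 dB increase (valid for moderate levels at 1 kHz) reproduces the equivalence between binaural summation and a 10 dB monaural increment. The function below is a simplified sketch of that rule, not a clinical loudness model:

```python
def sones_1khz(level_db_spl):
    # Rule of thumb for moderate levels at 1 kHz:
    # 40 dB SPL is about 1 sone, and each +10 dB doubles loudness.
    return 2 ** ((level_db_spl - 40) / 10)

monaural = sones_1khz(60)   # one ear at 60 dB SPL -> 4 sones
binaural = 2 * monaural     # binaural summation roughly doubles loudness
# Obtaining the same loudness with a single ear requires +10 dB:
assert binaural == sones_1khz(70)
```

The 3 dB figure for weak sounds reflects the steeper loudness growth near threshold, which this moderate-level sketch does not capture.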

Not only do signals sound louder when subjects listen with both ears rather than one, but the processing of information is also more sensitive to small differences, which, when occurring unilaterally, would be more difficult to separate from chance events. Thus, the just noticeable differences in intensity and frequency improve with signal redundancy and, hence, with bilateral presentation. Likewise, recognition is improved in the presence of noise. By the modification it induces in loudness growth, hearing impairment may lead to a slightly weaker binaural benefit in patients [for a review, see Dillon, 2001]. It is also observed that with binaural stimulation, sounds can be louder than with a monaural presentation without causing discomfort. This holds true even for CI-treated patients, who may be very sensitive to loudness-induced discomfort when electric stimulation increases monaurally, yet stay comfortable, despite a twofold increase in loudness, as long as the electric stimuli in each implanted ear stay below the tolerated limit.

Another benefit of binaural stimulation rests on the presence of spatial cues from which the localisation of a sound source in the horizontal plane is accurately determined. A first cue is given by the fact that the acoustic waveform of a sound arrives slightly earlier at the ear nearer the source. The resulting ITD systematically relates to the angular direction of the source. At frequencies below 2-3 kHz in humans, the auditory neurons partially phase lock to the fine structure of the sound (below 1 kHz) or to its envelope when the sound is not a pure tone, so that ITD information is preserved when action potentials reach the medial nucleus of the superior olivary complex.

In humans, ITD range from 0 to 700 μs, and ITD of the order of 10 μs can be discerned. The accuracy of localisation achieved with the analysis of ITD is a few degrees, depending on whether the sound source is in front of the subject or sideways. The neurons of the medial superior olive act as binaural coincidence detectors that discharge when the inputs from both sides are simultaneous. The delay line model of Jeffress [1948] assumes a hard-wired neural structure that may compute such coincidences, from which localisation is extracted, but this model, possibly valid for birds, fails in mammals [Grothe et al., 2010]. It would be useful to elucidate the actual mammalian mechanism, in order to better predict the minimal set of bilateral information necessary for achieving localisation when this information is distorted. In CI patients, one expects electric stimulation to respect timing cues, which should be accurate if the auditory neurons are not functionally impaired.
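These orders of magnitude can be checked with the classical rigid-sphere (Woodworth) approximation of the ITD, a textbook formula that is not part of the work reviewed here but agrees well with the 0-700 μs range quoted above; the head radius and speed of sound are assumed typical values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c_m_per_s=343.0):
    """Rigid-sphere (Woodworth) approximation of the ITD for a distant
    source at the given azimuth (0 deg = straight ahead, 90 deg = sideways)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_per_s) * (theta + math.sin(theta))

# The ITD grows from 0 (frontal source) to roughly 650-700 us (lateral source)
for azimuth in (0, 30, 60, 90):
    print(f"{azimuth:3d} deg -> {woodworth_itd(azimuth) * 1e6:.0f} us")
```

The monotonic growth of the ITD with azimuth is what makes it a usable directional cue, and the few-microsecond resolution mentioned above corresponds to azimuth changes of only a few degrees near the midline.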

While the ITD cues cannot be used at high frequencies, the head shadow effect increases with frequency. Below 200 Hz, ILD hardly reach a few decibels and show little dependence on source direction, although ILD increase substantially for lateral sources as distance decreases below 1 m, even at low frequencies [Brungart and Rabinowitz, 1999]. At 1 kHz, for example, ILD vary between 5 and 10 dB as a function of source direction [Shaw, 1974], and above 1 kHz, ILD and their directional dependency continue to increase with frequency. Their systematic relation to the angular direction of the source at a given frequency leads to source localisation, as they can be interpreted in terms of source laterality provided they exceed the just noticeable difference in intensity - a fraction of 1 dB according to Weber's law. ILD processing occurs in the lateral superior olive. Being excited by ipsilateral inputs and inhibited by contralateral ones, neurons of the lateral superior olive are very sensitive to the balance of levels from either side, and thus to ILD.

Source localisation in the vertical plane is a more difficult challenge. The main cues are provided by diffraction patterns created by the head and pinna, which result in characteristic spectral dips [Grothe et al., 2010]. The frequency at which a spectral dip occurs relates to the angular vertical position of the source, so that changes in the spectrum of a broadband source inform about vertical displacements. Spectrally poor sources are difficult to localise, even for normal-hearing subjects. Furthermore, typical hearing-impaired subjects experience an early loss of sensitivity at high frequencies, with enlarged cochlear filters, which irretrievably destroys their ability to identify vertical source position.

In a reverberating room, the first arriving wave front directly coming from a sound source does not arrive alone at the listener's ears, but it is accompanied by later, but sometimes louder, echoes reflected at the walls. Thanks to the precedence effect, however, a single auditory event is perceived, and only the wave front that is arriving first serves to determine source localisation, while echoes arriving between 2 and 50 ms later, even when 10 dB louder, only affect loudness, timbre and degree of spatial width. This echo cancellation may have to do with the long-lasting inhibition of neural responses after stimulus onset observed in the dorsal nucleus of the lateral lemniscus [Kidd and Kelly, 1996]. As the precedence effect relies upon binaural processing, asymmetric hearing loss is expected to induce a breakdown of echo suppression and of the ability to perform fusion, which is the process allowing, in reflective environments, a single sound to be perceived rather than a sound with multiple echoes. On the other hand, hearing impairment and ageing negatively affect the precedence effect [Akeroyd and Guy, 2011] so that even with binaural hearing aids a patient may still experience difficulties in the presence of reverberation.

The simple presence of a head in a sound field creates a diffraction pattern that leads to different signal-to-noise ratios in the two ears, whenever two competing sources, one emitting a signal and the other one noise, are placed in different directions. The signal-to-noise ratio at the ear furthest from the noise is increased as the head attenuates the noise, while this ratio decreases at the ear nearest to the noise source. As a result, a difference of more than 15 dB in the signal-to-noise ratio (averaged across frequencies) may exist between the two ears. The size of the effect is less if the signal and noise sources get closer, or if frequency spectra are narrow, and may not exceed a few dB.

With binaural hearing, whether normal or aided, the head shadow provides an advantage by sheltering the ear turned toward the source of an important sound from noise from the other side. Subjects need only listen to the ear receiving the less noisy signal. Conversely, subjects with unilateral hearing loss are at a disadvantage every time the important sound comes from the impaired side, even in silent surroundings, and the disadvantage increases in the presence of diffuse background noise.

A consequence of the ability of the central auditory system to extract spatial cues from the analysis of ITD and ILD is that when a target sound signal and competing sounds (noise) become spatially separated, a spatial release from masking emerges. Assume that the signal and noise come from two sources at the same place, their relative levels adjusted so that the target signal is just masked; when the noise source moves to a different place, the target may become audible again, indicating a release from masking brought about by the spatial separation of the sources. This effect is also called binaural squelch, binaural unmasking or the Hirsh effect [Hirsh, 1948]. In the seminal paper by Hirsh, the effect was reported to be maximum in the N₀Sπ configuration, with a tone out of phase in the two ears (Sπ) and a noise in phase (N₀), for a tone frequency around 250 Hz, and it decreased with increasing frequency to about 3 dB at approximately 1,500 Hz and higher. Indeed, in a vast variety of situations, the masking level difference varies from a few dB, for single, non-speech maskers, up to 12 dB for multiple maskers that carry speech content [Jones and Litovsky, 2011]. Numerous models of the binaural release from masking have been proposed, and the exact neural mechanism by which signals are mathematically combined, lateralised or cross-correlated remains unknown.
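The intuition behind one family of such models, the equalization-cancellation idea, can be sketched numerically: if the brain could subtract the two ear signals, an in-phase masker (N₀) would cancel while an out-of-phase tone (Sπ) would double. The toy example below, with arbitrary signal parameters, illustrates only this principle, not the actual neural computation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs                    # 1 s of signal
noise = rng.standard_normal(t.size)       # diotic masker, N0 (in phase)
tone = 0.1 * np.sin(2 * np.pi * 250 * t)  # weak 250 Hz target

# N0S0: tone in phase at both ears; N0Spi: tone inverted at one ear
left_s0, right_s0 = noise + tone, noise + tone
left_spi, right_spi = noise + tone, noise - tone

# Subtracting the ear signals ("cancellation"):
residual_s0 = left_s0 - right_s0     # everything cancels; nothing to detect
residual_spi = left_spi - right_spi  # noise cancels; the tone survives, doubled
```

In this idealised subtraction the N₀Sπ configuration leaves a noise-free target, which is why it yields the largest masking level difference; real binaural processing achieves only a partial, frequency-dependent version of this cancellation.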

Binaural benefits are relevant in the simple situation in which noise and speech from two different sources are in competition. Performance is measured with a speech recognition test in which the percentage of correctly identified items is the outcome. Uni- and bilateral conditions are compared (most easily, in hearing-impaired subjects, by turning off one hearing aid) in search of an advantage for the bilateral situation. The acoustic paradigms in which each of the aforementioned benefits can be evaluated are different, which allows separate evaluation of each effect.

The head shadow benefit can be derived from a situation in which speech is presented from the front and noise is emitted on the ipsilateral, then the contralateral side relative to the operating ear, for example an ear with a CI. The difference in speech reception thresholds in the two situations expresses the benefit experienced when the noise moves from the ipsi- to the contralateral side where the CI is shielded from the noise [Schleich et al., 2004].

For the binaural squelch effect, again speech can be presented from the front and noise from the side under consideration. This time, the tested subject wears the two implants or hearing aids. Speech recognition is compared in the unilateral condition and in the bilateral condition with the same acoustic set-up, with the tested hearing device turned on. Any improvement in speech recognition, despite the fact that the noise gets louder, is due to the binaural squelch.

To isolate the binaural loudness summation effect, speech can be presented from the front in quiet, or speech and noise are both presented from the front. The difference in percent correct scores for speech recognition between the bilateral condition and the unilateral condition provides the binaural loudness summation.
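The three measurement paradigms above reduce to simple differences between conditions. The sketch below expresses all three effects as differences between speech reception thresholds (SRT, in dB SNR; lower is better), following the logic of Schleich et al. [2004]; the SRT values are invented, for illustration only:

```python
# Hypothetical speech reception thresholds (SRT, dB SNR; lower is better),
# showing how each binaural effect is derived as a condition difference,
# after the paradigm of Schleich et al. [2004]. The values are invented.
srt = {
    "uni_noise_ipsi":   8.0,  # one device on, noise on its side
    "uni_noise_contra": 2.0,  # one device on, noise on the opposite side
    "bil_noise_side":   0.5,  # both devices on, noise on one side
    "uni_frontal":      5.0,  # one device on, speech and noise from the front
    "bil_frontal":      4.0,  # both devices on, speech and noise from the front
}

# Head shadow: the noise moves from the ipsi- to the contralateral side
head_shadow_db = srt["uni_noise_ipsi"] - srt["uni_noise_contra"]  # 6.0 dB
# Squelch: the noise-side device is switched on, same acoustic set-up
squelch_db = srt["uni_noise_contra"] - srt["bil_noise_side"]      # 1.5 dB
# Summation: a second device is added, speech and noise both frontal
summation_db = srt["uni_frontal"] - srt["bil_frontal"]            # 1.0 dB
```

The ordering of the hypothetical values reflects the typical pattern reported by Schleich et al. [2004]: the head shadow effect is by far the largest, while squelch and summation contribute only a few decibels.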

A first caveat is that the speech intelligibility measures on which evaluations of binaural advantage are based are fraught with statistical challenges. Test-retest differences of 10% or more are found unless the lists of items used by the speech test are sufficiently long. Differences in lists may easily bias the difference between monaural and binaural performances enough to wrongly conclude that an advantage exists when it does not, or that it is absent when it is genuine [Dillon, 2001]. A second caveat is that most of the papers that report significant binaural advantages derive mean advantages from cohort studies, whereas inspection of individual results may not reveal any effect big enough to be noticeable [Dillon, 2001].
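The size of these test-retest differences follows from simple binomial statistics: treating a list of n items as independent trials of equal difficulty (an optimistic simplification, so true variability is usually larger) gives a lower bound on the variability of a percent-correct score. The sketch below uses only this simplification:

```python
import math

def score_sd(p_correct, n_items):
    """Binomial standard deviation of a percent-correct score, in % points,
    assuming the n_items behave as independent, equally difficult trials."""
    return 100 * math.sqrt(p_correct * (1 - p_correct) / n_items)

def retest_range_95(p_correct, n_items):
    """Approximate 95% range (+/-) for the difference of two such scores."""
    return 1.96 * math.sqrt(2) * score_sd(p_correct, n_items)

# With a 50-item list and a true score of 70%, a single score has an SD of
# about 6.5 points, so test-retest differences well above 10 points arise
# by chance alone; longer lists shrink the variability as 1/sqrt(n).
```

This is why a monaural-versus-binaural difference of a few percent, measured with short lists, cannot by itself establish or rule out a binaural advantage.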

In adult patients, depending on the time course of hearing thresholds across frequencies in each ear, large asymmetries may have appeared that hearing aid fitting or cochlear implantation have not necessarily smoothed out. The question thus arises whether genuine binaural benefits can be achieved in situations such that fusion of the information from the same source coming from the two ears is difficult or impossible. Among the three aforementioned benefits, only the head shadow effect does not rely upon fused information, while the binaural summation and binaural squelch effects require it. Furthermore, after years of having to process asymmetric inputs, the neural circuitry may have been affected by plasticity so that time and training may be needed before binaural advantages recover [Keating and King, 2013].

In the absence of the clear physiological background that intensive ongoing research strives to establish, e.g. Kan and Litovsky [2014], there is empirical evidence that true binaural benefits, even if they remain modest and below normal, survive in a large number of asymmetric situations. The published series encompass bilateral CI, a CI on one side and a conventional hearing aid in the contralateral ear, with all possible combinations of electric or bimodal stimulations, and various degrees of restoration of hearing sensitivities across the frequency spectrum. Among the most recent publications, the head shadow effect, binaural summation, binaural squelch and improved sound localisation have been reported, with good, subjective benefit as well [Laszig et al., 2004; Potts et al., 2009; Firszt et al., 2012; Hua et al., 2012; Morera et al., 2012; Dwyer et al., 2014], even though most of these reports analysed patients in whom one ear was stimulated electrically and the other ear acoustically. An example that illustrates how a significant effect may still be a modest one, which emerges only when large cohorts are analysed, is provided by the report of a significant bimodal benefit in a sample of 141 patients fitted with an implant on one side and a conventional hearing aid on the other side [Illg et al., 2014]. Residual hearing was found to significantly correlate with the patients' benefits, but the percentage of variance explained by this correlation was between 5 and 10%, which indicates that the performance almost entirely depended on other, unknown factors.

The previous clinical report [Illg et al., 2014] and many others identify another bilateral advantage which, although pertaining to binaural summation, is important enough to be singled out. Low-frequency information relating to the fundamental frequency of a complex sound (e.g. the speech of one speaker) may be provided by only one of the two ears, for example the one receiving acoustic stimulation while the other ear is stimulated only electrically by a CI, which does not preserve pitch information. Even so, it improves the subjective feeling of ‘spatial hearing', the perception of music and the detection of speech in noise, through the cumulative use of binaural summation and monaural extraction of pitch, the latter likely an essential element for the formation of a stream in the presence of noise.

In summary, at present, encouraging results showing binaural benefits have been found even in situations in which the differences between the cues provided by the two ears might have seemed too large for restoring binaural mechanisms. Work is in progress [Kan and Litovsky, 2014] to improve our knowledge on neuronal mechanisms of binaurality, which should inspire new strategies not only for binaural processing in hearing devices, but perhaps also for training newly equipped patients to better exploit their restored binaurality.

No conflict of interest reported.

Akeroyd MA, Guy FH: The effect of hearing impairment on localization dominance for single-word stimuli. J Acoust Soc Am 2011;130:312-323.
Brungart DS, Rabinowitz WM: Auditory localization of nearby sources. Head-related transfer functions. J Acoust Soc Am 1999;106:1465-1479.
Dillon H: Hearing Aids. Sydney, Boomerang Press, 2001, pp 370-403.
Dwyer NY, Firszt JB, Reeder RM: Effects of unilateral input and mode of hearing in the better ear: self-reported performance using the speech, spatial and qualities of hearing scale. Ear Hear 2014;35:126-136.
Firszt JB, Holden LK, Reeder RM, Cowdrey L, King S: Cochlear implantation in adults with asymmetric hearing loss. Ear Hear 2012;33:521-533.
Fletcher H, Munson WA: Loudness, its definition, measurement and calculation. J Acoust Soc Am 1933;5:82-108.
Grothe B, Pecka M, McAlpine D: Mechanisms of sound localization in mammals. Physiol Rev 2010;90:983-1012.
Hirsh IJ: The influence of interaural phase on interaural summation and inhibition. J Acoust Soc Am 1948;20:536-544.
Hua H, Johansson B, Jönsson R, Magnusson L: Cochlear implant combined with a linear frequency transposing hearing aid. J Am Acad Audiol 2012;23:722-732.
Illg A, Bojanowicz M, Lesinski-Schiedat A, Lenarz T, Büchner A: Evaluation of the bimodal benefit in a large cohort of cochlear implant subjects using a contralateral hearing aid. Otol Neurotol 2014;35:e240-e244.
Jeffress LA: A place theory of sound localization. J Comp Physiol Psychol 1948;41:35-39.
Jones GL, Litovsky RY: A cocktail party model of spatial release from masking by both noise and speech interferers. J Acoust Soc Am 2011;130:1463-1474.
Kan A, Litovsky RY: Binaural hearing with electrical stimulation. Hear Res 2014, DOI: 10.1016/j.heares.2014.08.005.
Keating P, King AJ: Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Front Syst Neurosci 2013;7:123.
Kidd SA, Kelly JB: Contribution of the dorsal nucleus of the lateral lemniscus to binaural responses in the inferior colliculus of the rat: interaural time delays. J Neurosci 1996;16:7390-7397.
Laszig R, Aschendorff A, Stecker M, Müller-Deile J, Maune S, Dillier N, Weber B, Hey M, Begall K, Lenarz T, Battmer RD, Böhm M, Steffens T, Strutz J, Linder T, Probst R, Allum J, Westhofen M, Doering W: Benefits of bilateral electrical stimulation with the nucleus cochlear implant in adults: 6-month postoperative results. Otol Neurotol 2004;25:958-968.
Morera C, Cavalle L, Manrique M, Huarte A, Angel R, Osorio A, Garcia-Ibañez L, Estrada E, Morera-Ballester C: Contralateral hearing aid use in cochlear implanted patients: multicenter study of bimodal benefit. Acta Otolaryngol 2012;132:1084-1094.
Potts LG, Skinner MW, Litovsky RA, Strube MJ, Kuk F: Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J Am Acad Audiol 2009;20:353-373.
Rayleigh L: On our perception of sound direction. Philos Mag 1907;13:214-232.
Schleich P, Nopp P, D'Haese P: Head shadow, squelch, and summation effects in bilateral users of the MED-EL COMBI 40/40+ cochlear implant. Ear Hear 2004;25:197-204.
Shaw EA: Transformation of sound pressure level from the free field to the eardrum in the horizontal plane. J Acoust Soc Am 1974;56:1848-1861.