Introduction: Contralateral routing of signals (CROS) overcomes the head shadow effect by redirecting speech signals from the contralateral ear to the better-hearing cochlear implant (CI) ear. Here we tested the performance of an adaptive monaural beamformer (MB) and a fixed binaural beamformer (BB) using the CROS system of Advanced Bionics.

Methods: In a group of 17 unilateral CI users, we evaluated the benefits of MB and BB for speech recognition by measuring speech reception thresholds (SRTs) with and without beamforming. MB and BB were additionally evaluated with signal-to-noise ratio (SNR) measurements using a KEMAR manikin. We also assessed the effect of residual hearing in the CROS ear on the benefits of MB and BB. Speech was delivered from the front of the listener in a background of homogeneous 8-talker babble noise.

Results: With CI-CROS in omnidirectional settings with the T-mic active on the CI as a reference, BB significantly improved the SRT by 1.4 dB, whereas MB yielded no significant improvement. The difference in effects on SRT between the two beamformers was, however, not significant. SNR effects were substantially larger, at 2.1 dB for MB and 5.8 dB for BB. CI-CROS with default omnidirectional settings also improved SRT and SNR by 1 dB over CI alone. Residual hearing did not significantly affect beamformer performance.

Discussion: We recommend the use of BB over MB for CI-CROS users. Residual hearing in the CROS ear is not a limiting factor for fitting a CROS device, although a bimodal option should be considered.

People with asymmetric hearing loss, such as users of unilateral cochlear implants (CIs), experience an attenuated signal because of the head shadow effect when the speech source is on the side of the contralateral ear. Consequently, speech can become less intelligible, especially in a noisy environment. Contralateral routing of signals (CROS) mitigates the head shadow effect by capturing contralateral sounds with a microphone and redirecting the signal to the better-hearing CI ear. CROS is especially beneficial in situations where speech is presented to the CROS side and noise to the CI ear [Taal et al., 2016; Stronks et al., 2022]. The target group for CROS in the CI population consists of unilaterally implanted patients without contralateral residual hearing. They may have a single-sided implant because of personal preference or because of restricted health insurance reimbursements [Vickers et al., 2016].

When the contralateral ear has useful residual hearing, a bimodal approach can be considered [Morera et al., 2005] by fitting a hearing aid (HA) contralaterally. Fitting recommendations for a contralateral HA in bimodal solutions vary, however, and a clear consensus on the candidacy criteria is lacking. Ching [2005] recommends bimodal fitting if there is any amount of measurable residual hearing in the contralateral ear. By contrast, others have suggested fitting the HA only when pure-tone thresholds do not exceed 80 dB HL at low frequencies [El Fata et al., 2009; Illg et al., 2014]. In addition, CI users may prefer not to use an HA even if there is beneficial residual hearing contralaterally [Stronks et al., 2020]. For these reasons, investigations into the effects of contralateral residual hearing are relevant for unilateral CI users, including what a CROS solution could offer.

Directional microphones, also known as beamformers, are spatial filters that attenuate input originating from the sides and the back of the listener, whereas signals in the frontal field are unaffected [Taal et al., 2016]. The Advanced Bionics CROS device includes an adaptive monaural beamformer (MB) and a fixed binaural beamformer (BB). The MB (UltraZoom™) operates independently on the CI and CROS speech processors, automatically adjusting the shape of the cardioid to maximize attenuation in the region of the lowest signal-to-noise ratio (SNR). Via wireless voice streaming, BB (StereoZoom™) combines the binaural signal from the CI and CROS devices to further increase spatial selectivity, as can be seen from the polar patterns (Fig. 1). The underlying algorithms have been discussed in detail elsewhere [Hehrmann et al., 2012; Stronks et al., 2022]. Both beamformers are effective in CI users [Hehrmann et al., 2012], bimodal listeners [Stronks et al., 2022], and CI-CROS users [Dorman et al., 2018; Ernst et al., 2019b; Núñez-Batalla et al., 2020]. In a study of bilateral CI users and bimodal listeners, BB performance significantly exceeded that of MB [Ernst et al., 2019a], and in separate work, we have found a greater magnitude of SNR improvement for BB compared with MB [Stronks et al., 2022]. In that previous study, however, we could not confirm the superior performance of BB when testing speech reception thresholds (SRTs) in bimodal listeners.
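For readers unfamiliar with directional microphones, the sketch below illustrates the general delay-and-subtract principle behind a first-order (cardioid) beamformer built from two closely spaced omnidirectional microphones. It is emphatically not the UltraZoom or StereoZoom algorithm; the microphone spacing, frequency, and internal delay are illustrative assumptions only.

```python
# Minimal sketch of a first-order differential (cardioid) beamformer built from two
# closely spaced omnidirectional microphones. This is NOT the Advanced Bionics
# UltraZoom/StereoZoom implementation; it only demonstrates the general principle
# of attenuating sound from the rear while leaving frontal sound largely unaffected.
import numpy as np

def cardioid_response(angle_deg, mic_spacing_m=0.012, freq_hz=1000.0, c=343.0):
    """Magnitude response of a delay-and-subtract beamformer vs. arrival angle."""
    theta = np.deg2rad(angle_deg)
    tau_ext = (mic_spacing_m / c) * np.cos(theta)  # inter-microphone travel-time difference
    tau_int = mic_spacing_m / c                    # internal delay -> null at 180 degrees
    omega = 2 * np.pi * freq_hz
    # front microphone minus internally delayed rear microphone, unit-amplitude plane wave
    return np.abs(1 - np.exp(-1j * omega * (tau_ext + tau_int)))

angles = np.arange(0, 360, 15)
resp = cardioid_response(angles)
resp_db = 20 * np.log10(np.maximum(resp, 1e-6) / resp[0])  # normalize 0 deg to 0 dB, as in Fig. 1
for a, r in zip(angles, resp_db):
    print(f"{int(a):3d} deg: {float(r):6.1f} dB")
```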

Fig. 1.

Polar plots of omnidirectional microphone settings (omni, purple dashed line), the monaural beamformer (MB, solid blue line) and binaural beamformer (BB, solid red line) obtained from a KEMAR equipped with a CI on the left side and a hearing aid on the other. The stimulus was pink noise presented at 80 dB SPL. 0° was set at 0 dB. Image adapted from Stronks et al. [2022].


Our findings also suggest that the benefits of CROS for reducing the head shadow effect lessen with greater residual hearing in the CROS ear [Stronks et al., 2022]. This observation led us to hypothesize that, with more residual hearing in the CROS ear, that ear contributes more to speech processing independently of the CI ear, thereby decreasing CROS effectiveness. Here, by comparing SRTs with speech-weighted SNR recordings [Killion and Mueller, 2010], we investigated whether residual hearing levels in the CROS ear affect beamformer performance in a population of unilateral CI users. Given that only the CI ear benefits from beamforming, we expected that listeners with more residual hearing in the CROS ear would rely more on that ear for speech recognition, attenuating the benefits of beamforming.

Study Design and Participants

This single-blinded (participants unaware of intervention), prospective study had a crossover design. Users of a unilateral Advanced Bionics (Valencia, CA, USA) CI were recruited, with varying amounts of residual hearing in the contralateral ear. The implanted ear was considered to be functionally deaf. Inclusion criteria were a CVC phoneme recognition score in quiet of at least 80% at a speech level of 65 dB SPL and at least 6 months of experience with the CI. All participants used the HiRes™ Optima speech coding strategy. Any HAs in the non-implanted ear were removed during testing, and no earplugs were used. Residual hearing was expressed as the average audiometric pure-tone threshold across 500, 1,000, and 2,000 Hz (PTA500–2000) [Carhart, 1971]. Two participants had been clinically fitted with a CROS device before being recruited for the study.

Four microphone configurations were tested: CI, CI-CROS with a standard omnidirectional microphone setting, CI-CROS with adaptive MB, and CI-CROS with BB. On the day of testing, participants were fitted with a research Q90™ processor (Advanced Bionics, Valencia), using their own home-use threshold and maximum comfortable levels, and with a Naída™ Link CROS device (Phonak, Sonova AG, Stäfa, Switzerland).

The T-mic is an omnidirectional microphone suspended from the behind-the-ear unit (BTU) of the CI speech processor to place it in front of the ear canal [Gifford and Revit, 2010]. Beamforming is achieved by means of the processor microphones that are situated on top of the BTU. Because all participants clinically used a T-mic on their CI processor, we also fitted one on the research processor. The CROS device uses only processor microphones.

The clinical acoustic filter setting of all participants was “extended low” (250–8,700 Hz). During testing, the “standard” filter setting (350–8,700 Hz) was used, however, to allow comparison with data from a previous beamforming study in bimodal users [Stronks et al., 2022]. We did not expect these filter settings to substantially affect beamforming.

Speech-in-Noise Testing

Speech recognition in noise was tested in an audiometric, sound-attenuated booth measuring 3.4 × 3.2 × 2.4 m (l × w × h). Participants were seated in the middle of the room in front of a loudspeaker (MSP5A monitor speaker, Yamaha Corp., Japan) that generated the speech stimuli. This loudspeaker was placed 1.2 m above the floor and 1.2 m from the listener, well below the critical distance. The critical distance was determined to be 2 m or more for frequencies above 500 Hz [Van der Beek et al., 2007], and reverberations of the speech stimuli were thus not expected to affect beamforming performance [Ricketts and Hornsby, 2003]. Because the walls of the booth were sound-treated, noise reverberation was expected to have little effect on the performance of the beamformers. The participants were instructed to face the frontal loudspeaker, and head movements were not allowed.

Noise was applied using eight loudspeakers (Control 1, JBL Corp., Los Angeles, CA, USA) distributed symmetrically around the booth in two planes below and above the listener. They were calibrated individually with a sound meter (Rion NA-28, Rion Co. Ltd., Tokyo, Japan) to ensure that the sound level was equal (60 dBA) in all directions around the listener’s head. The noise loudspeakers were positioned in the corners of the room at 45°, such that no noise source was located directly to the sides, the front, or the back of the participant (Fig. 2; see Stronks et al. [2020] for more details on the homogeneous noise setup).

Fig. 2.

Schematic of the homogeneous noise setup. Eight loudspeakers (gray), placed symmetrically around the participant, generated the noise (60 dBA). A single loudspeaker placed approximately 1 m in front of the participant (orange) was used to present the speech material.


Speech recognition in noise was assessed using the Dutch/Flemish Matrix sentence material consisting of a closed-set speech corpus of 13 lists with 20 sentences spoken by a female voice [Luts et al., 2014]. Each list was used only once per session, and the lists were randomly assigned to a test condition. The sentence order within each list was fixed. Lists 1 and 2 were used at the beginning of the session for training purposes to reduce learning effects. SRTs were measured by adaptively varying the speech level based on the procedure of Dyballa et al. [2015] executed in a MATLAB environment (2017b, MathWorks, Inc., Natick, MA, USA). Participants listened to a sentence and repeated it orally. Correctly repeated words were scored manually. Guessing was allowed, and no feedback on performance was provided to the participants.
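The exact adaptive rule of Dyballa et al. [2015] is not reproduced here. The sketch below only illustrates a generic word-score-driven adaptive track of the kind used with five-word Matrix sentences, driving the speech level toward the 50% intelligibility point; the step size, scoring sequence, and SRT estimator are illustrative assumptions.

```python
# Hedged, generic sketch of a word-score-driven adaptive SRT track for a 5-word
# Matrix sentence test. This is NOT the exact procedure of Dyballa et al. [2015].
def next_speech_level(current_level_db, n_correct, n_words=5, step_db=2.0):
    """Lower the speech level when more than half of the words were correct,
    raise it when fewer were, so the track converges on the 50% point."""
    proportion_correct = n_correct / n_words
    return current_level_db - 2 * step_db * (proportion_correct - 0.5)

# Example track: noise fixed at 60 dBA, speech starting at 60 dB SPL (0 dB SNR).
level = 60.0
words_correct = [5, 4, 2, 3, 2, 3, 3, 2, 3]   # illustrative scores, one per sentence
levels = []
for n_correct in words_correct:
    levels.append(level)
    level = next_speech_level(level, n_correct)
srt_estimate = sum(levels[-6:]) / 6           # e.g., mean level over the last sentences
print(f"SRT estimate: {srt_estimate:.1f} dB SPL ({srt_estimate - 60:.1f} dB SNR)")
```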

The background noise was 8-talker babble adapted from the files produced by the International Collegium of Rehabilitative Audiology [Dreschler et al., 2001]. The original file was a dual-talker babble noise consisting of temporally modulated broadband noise with spectral characteristics resembling a male voice. Each channel represented a single talker. The noise files were semi-randomly offset to create uncorrelated noise streams that were played back from the eight loudspeakers in the booth. The babble noise was unintelligible and presented continuously throughout the tests (see Stronks et al. [2020] for more detail).
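The following sketch shows, in outline, how largely uncorrelated noise streams can be derived from a single long recording by applying time offsets, as described above. The file name, the fully random (rather than semi-random) offsets, and the fixed seed are assumptions for illustration only.

```python
# Illustrative sketch: derive several largely uncorrelated babble streams from one
# long recording by circularly shifting it with different time offsets.
import numpy as np
import soundfile as sf  # assumed WAV reader; any audio-file library would do

babble, fs = sf.read("icra_babble.wav")    # hypothetical long single-channel babble file
rng = np.random.default_rng(seed=1)
n_speakers = 8
offsets_s = rng.uniform(0, len(babble) / fs, size=n_speakers)  # illustrative random offsets

streams = [np.roll(babble, -int(t * fs)) for t in offsets_s]   # one stream per loudspeaker
# With large offsets, the correlation between any two streams is low:
r = np.corrcoef(streams[0][: fs * 10], streams[1][: fs * 10])[0, 1]
print(f"Correlation between streams 1 and 2 over the first 10 s: {r:.3f}")
```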

SNR Recordings with KEMAR

The physical effects of MB and BB on SNR were measured with KEMAR [Burkhard and Sachs, 1975]. Polar patterns (Fig. 1) were kindly provided by Advanced Bionics, LLC (Valencia). They were recorded in an anechoic chamber with KEMAR positioned on a turntable using pink noise delivered at 80 dBA from a loudspeaker positioned at 0°. KEMAR was equipped with a Q90 CI speech processor on the right ear and a Naída Link hearing aid on the left and was rotated in steps of 15°. Stimulus levels were recorded from the speech processor and converted to a decibel scale, with 0° as a reference. As a comparator for the human SRT data and speech-weighted KEMAR measurements from the test setup, we calculated a measure of directivity from the polar plots based on equations (3) and (4) from Chung and Zeng [2009].
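The directivity calculation is not spelled out above, and equations (3) and (4) of Chung and Zeng [2009] are not reproduced here. The sketch below instead uses the common horizontal-plane definition of a directivity index (on-axis power relative to the power averaged over all sampled angles), which may differ in detail from the published equations; the sample polar pattern is made up and is not the Figure 1 data.

```python
# Hedged sketch of a directivity estimate computed from a polar pattern sampled in
# the horizontal plane: on-axis power relative to the mean power over all angles.
# Not necessarily identical to equations (3) and (4) of Chung and Zeng [2009].
import numpy as np

def directivity_index_db(angles_deg, gain_db):
    """Directivity index from a polar pattern sampled at equally spaced angles."""
    power = 10 ** (np.asarray(gain_db, dtype=float) / 10)             # dB -> linear power
    on_axis = power[np.argmin(np.abs(np.asarray(angles_deg) % 360))]  # response at 0 degrees
    return 10 * np.log10(on_axis / power.mean())

# Toy cardioid-like pattern sampled every 15 degrees (made-up values, not the Fig. 1 data)
angles = np.arange(0, 360, 15)
gains = -6 * (1 - np.cos(np.deg2rad(angles)))
print(f"Directivity estimate: {directivity_index_db(angles, gains):.1f} dB")
```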

For the SNR recordings in the speech recognition test setup, we used long-term speech-shaped noise from the front loudspeaker (signal) and homogeneous 8-talker babble (noise) from the distributed loudspeakers. Both stimuli were presented at 60 dBA, and the CI output to the speech and noise was recorded separately. The output of the Q90 speech processor was recorded using a DirectConnect™ module connected to a digital oscilloscope (SmartScope, Antwerp, Belgium). Recordings were band-pass filtered using cut-off frequencies resembling the ‘standard’ setting during subject testing (350–8,000 Hz). To extrapolate SNR to SRT improvements, we applied a weighting procedure [Killion and Mueller, 1990] by dividing each audio recording into 17 one-third octave bands weighted with factors [Killion and Mueller, 2010] reflecting their importance for speech recognition. The weighted root mean square (rms) values of the bands were summed and converted to an overall rms, and the resulting SNR in dB was calculated as 20·log10(rms_signal/rms_noise).
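A minimal sketch of this speech-weighted SNR computation is given below. The band edges and the flat placeholder weights are assumptions for illustration; the study used the band-importance factors of Killion and Mueller [2010] and its own filter implementation.

```python
# Hedged sketch of the speech-weighted SNR computation: split each recording into
# one-third octave bands, weight the band rms values by speech-importance factors,
# combine them, and express the signal-to-noise ratio in dB.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def third_octave_centers(f_low=350.0, f_high=8000.0, n_bands=17):
    """Approximate center frequencies of n_bands bands spanning f_low..f_high."""
    return np.geomspace(f_low * 2 ** (1 / 6), f_high / 2 ** (1 / 6), n_bands)

def weighted_rms(x, fs, centers, weights):
    band_rms = []
    for fc in centers:
        sos = butter(4, [fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)],
                     btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        band_rms.append(np.sqrt(np.mean(band ** 2)))
    band_rms = np.asarray(band_rms)
    return np.sqrt(np.sum(weights * band_rms ** 2))   # combine weighted band powers

def speech_weighted_snr_db(signal_rec, noise_rec, fs, weights):
    centers = third_octave_centers()
    rms_signal = weighted_rms(signal_rec, fs, centers, weights)
    rms_noise = weighted_rms(noise_rec, fs, centers, weights)
    return 20 * np.log10(rms_signal / rms_noise)

# Toy example with flat placeholder weights (the study used speech-importance factors):
fs = 44100
weights = np.ones(17) / 17
rng = np.random.default_rng(0)
snr = speech_weighted_snr_db(rng.normal(size=fs), 0.5 * rng.normal(size=fs), fs, weights)
print(f"Speech-weighted SNR: {snr:.1f} dB")   # ~6 dB for this toy input
```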

For the polar pattern measurements, a single loudspeaker was used, and during the SNR recordings in the participant test setup, the long-term speech-shaped noise and the 8-talker babble were recorded separately. In both cases, speech and noise were therefore never present simultaneously, so MB could not establish the region with the lowest SNR, and we deployed a nonadaptive version of MB with its point of maximal noise suppression fixed at 120°. Based on directivity-index calculations, this was expected to be the state of the algorithm closest to optimal for speech presented frontally in a homogeneous noise field, and thus the most likely configuration during participant testing. BB is most effective when speech comes from the front and noise from the sides (see polar pattern, Fig. 1).

Statistics

Data were tested for normality by applying D’Agostino and Pearson’s test to the pooled test and retest SRTs of each individual microphone setting (34 measurements per setting). To assess whether microphone setting (CI omnidirectional, CI-CROS omnidirectional, CI-CROS with MB, and CI-CROS with BB) and residual hearing affected SRTs, we applied a linear mixed model (LMM) using SPSS for Windows (version 23.0.0.0, IBM Corp., Armonk, NY, USA). The microphone setting was entered as a categorical fixed-effects factor and PTA500–2000 in the contralateral ear as a fixed-effects covariate. To account for learning effects and fatigue, we entered session number and trial number as fixed-effect covariates. Participant ID was entered as a random variable, and an intercept was included for both the fixed and random effects. The covariance type was set to unstructured.

For evaluating whether residual hearing affects the performance of the different microphone settings, we constructed a second LMM similar to the first, except with microphone setting and PTA500–2000 entered as a single interaction factor (mic setting × PTA500–2000) rather than as separate factors. We used session and trial number as covariates to include learning effects and fatigue in the model. As an integral part of the LMM procedure, post hoc t-testing was performed on the parameter estimates using Šidák’s correction for multiple comparisons in SPSS. Other LMM settings, such as the method used (restricted maximum likelihood), were left at their defaults (SPSS 23).
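For illustration only, the two models described above can be written as follows in Python/statsmodels instead of SPSS. The data file and column names are assumptions, and SPSS covariance options such as 'unstructured' do not map one-to-one onto MixedLM, so this is a sketch rather than an exact replication of the analysis.

```python
# Hedged, illustrative equivalent of the two SPSS linear mixed models described above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("srt_data.csv")  # assumed long format: one row per SRT measurement

# Model 1: microphone setting and residual hearing as separate fixed effects,
# session and trial number as covariates, random intercept per participant.
m1 = smf.mixedlm("SRT ~ C(mic_setting) + PTA500_2000 + session + trial",
                 data=df, groups=df["participant"]).fit(reml=True)
print(m1.summary())

# Model 2: microphone setting and residual hearing as a single interaction factor.
m2 = smf.mixedlm("SRT ~ C(mic_setting):PTA500_2000 + session + trial",
                 data=df, groups=df["participant"]).fit(reml=True)
print(m2.summary())
```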

Test/retest variability was determined using the within-subject standard deviation and repeatability as defined by Bland and Altman [1996], with the data pairs acquired in omnidirectional microphone settings. Test and retest were corrected for the within-session (i.e., across-trial) learning effect, and the retest was corrected for between-session performance improvement.
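For two measurements per participant, the Bland and Altman [1996] statistics reduce to a simple calculation, sketched below: the within-subject standard deviation follows from the test-retest differences, and the repeatability coefficient is 1.96 · √2 · s_w ≈ 2.77 · s_w.

```python
# Minimal sketch of the test/retest statistics of Bland and Altman [1996] for
# paired (test, retest) measurements.
import numpy as np

def within_subject_sd(test, retest):
    """s_w from test-retest pairs: sqrt of half the mean squared within-pair difference."""
    d = np.asarray(test, dtype=float) - np.asarray(retest, dtype=float)
    return np.sqrt(np.mean(d ** 2) / 2)

def repeatability(test, retest):
    """Expected upper bound on the difference between two repeated measures in 95% of cases."""
    return 1.96 * np.sqrt(2) * within_subject_sd(test, retest)

# With s_w = 1.9 dB (the CI-only condition reported in the Results), the repeatability
# is 2.77 * 1.9 ≈ 5.3 dB, matching the value given below.
```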

The demographics of the 17 included participants are shown in Table 1. All participants used the HiRes Optima speech coding strategy, and six wore an HA in the non-implanted ear. The median PTA500–2000 of the non-implanted ears is shown in Figure 3. One participant (S11) had near-normal hearing in the non-implanted ear. Each microphone setting yielded normally distributed SRTs (p > 0.05). The within-subject standard deviation based on all available data was 1.3 dB. Using the SRT data of the CI condition only, the within-subject standard deviation was 1.9 dB, and the repeatability according to Bland and Altman [1996] was 5.3 dB, meaning that repeated measures were expected to differ by up to 5.3 dB in 95% of cases. This relatively high test/retest variability was not due only to learning effects, because correcting the SRTs for the estimated across-session learning effect using the LMM estimates (see below) only increased the repeatability value. We expect this was caused by relatively large between-participant differences in the learning effect.

Table 1.

Participant demographics

Fig. 3.

Median audiometric pure-tone thresholds. Gray: interquartile distances. HL: hearing loss.


The raw SRT data obtained at the different microphone settings are shown in Figure 4a. The corresponding SRT benefits and SNR improvements, both relative to CI-only in the omnidirectional setting, are shown in Figure 4b (dots and black lines, respectively). LMM analysis with microphone setting and PTA500–2000 included as separate factors showed that microphone setting significantly affected SRT (F = 11.81, p < 0.0001), as did PTA500–2000 (F = 6.25, p = 0.025), session number (F = 36.41, p < 0.0001), and trial number (F = 15.38, p = 0.00015). The SRT improved by 0.1 dB per dB HL of better (lower) PTA500–2000. The second session yielded SRTs that were 1.8 dB lower on average than in the first session, and the SRT improved by 0.5 dB with each subsequent trial.

Fig. 4.

Speech reception thresholds (SRTs). a SRTs plotted against the four microphone settings tested. b Benefits relative to CI. CI: cochlear implant only (black crosses); CROS: contralateral routing of signals to the CI ear (blue circles); MB: monaural dynamic beamforming with CROS system (red squares); BB: binaural static beamforming with CROS system (green diamonds). Colored bars: SRT averages; black bars: SNR averages. **p < 0.01; ***p < 0.001; and ****p < 0.0001.


To test whether BB outperforms MB, we conducted the six post hoc pairwise comparisons among the four microphone settings using t tests and Šidák’s multiple-comparisons correction. Table 2 lists the results of this analysis, including p values and 95% confidence intervals. Compared against CI with omnidirectional microphone settings, CI-CROS with MB and BB significantly improved the average SRT by 1.7 and 2.4 dB, respectively. The average SRT in the CI-CROS configuration was 1.0 dB lower than with CI alone, but this difference was not significant. Using omnidirectional CI-CROS as a reference, the SRT improvement with BB (1.4 dB) remained significant, but that of MB (0.7 dB) did not. With a comparative SRT decrease of 0.7 dB on average, BB was not significantly more effective than MB.

Table 2.

Pairwise comparisons of the linear mixed model parameter estimates


The KEMAR recordings in the test setup showed that MB and BB improved SNRs by 3.2 and 6.8 dB, respectively, relative to CI in omnidirectional settings. At 1.0 dB, the SNR benefit of CI-CROS over CI in the omnidirectional mic setting was similar to the SRT improvement. Using omnidirectional CI-CROS as the reference, the SNR benefits were 2.1 dB with MB and 5.8 dB with BB. The directivity estimates calculated from the polar patterns measured anechoically were 4.8 dB with MB and 6.4 dB with BB.

To investigate whether residual hearing affects beamforming performance, we plotted SRTs against PTA500–2000, showing a trend associating lower SRTs (i.e., better speech recognition) with lower PTAs (i.e., better residual hearing) for all microphone settings (Fig. 5), as shown by the LMM above. The lowest SRTs overall were seen with BB (green line in Fig. 5), followed by MB (red line) and CI-CROS (blue line). The difference was most pronounced at high PTAs and negligible at low PTAs. This overall trend across microphone settings indicated declining beamforming performance with increasing residual hearing function. Results of the LMM with the two factors entered as a single interaction factor (mic setting × PTA500–2000) corroborated these observations, yielding a significant overall interaction (F = 11.15, p < 0.0001). However, post hoc t testing revealed no significant differences between the interaction factors of any of the six microphone setting pairs (CI vs. CI-CROS, MB, or BB; CI-CROS vs. MB or BB; and MB vs. BB; p = 1.0 in all cases, after Šidák’s correction for multiple comparisons).

Fig. 5.

Speech reception thresholds (SRTs) plotted against the average pure-tone audiometric threshold across 500, 1,000, and 2,000 Hz (PTA500–2000). CI: cochlear implant only (black crosses); CI-CROS: contralateral routing of signals to the CI ear (blue circles); MB, BB: monaural adaptive (red squares) and binaural fixed (green diamonds) beamforming used with the CI-CROS configuration. Lines: trend lines based on simple linear regression. Statistics, as described in the main text, were obtained by linear mixed modeling.


In this study, we tested the performance of two beamformers in a group of unilateral CI users fitted with a CROS device and investigated the effect of residual hearing in the CROS ear. Compared with the CI-only condition in an omnidirectional microphone setting, MB and BB both significantly improved SRTs. In line with earlier findings in bimodally fitted CI users [Stronks et al., 2022], the benefits of the two beamformers for SRT did not differ significantly, even though the physical SNR recordings showed that BB substantially outperformed MB, also in agreement with our earlier data. PTA500–2000 significantly affected SRTs, but residual hearing did not significantly affect beamforming performance.

Adding CROS to the CI resulted in SNR and SRT improvements of 1 dB. As a result, when comparing the beamformers with CI-CROS instead of CI-only, their benefits were reduced by approximately this amount. Because BB needs binaural input (i.e., both the CI and the CROS device), a comparison of BB and MB against CI-CROS (in omnidirectional settings) is relevant. BB still significantly improved the SRT (by 1.4 dB on average), but MB did not (0.7 dB). The 0.7-dB difference between BB and MB was not significant.

The KEMAR recordings in the test setup showed that MB and BB improved SNRs by 2.1 and 5.8 dB, respectively, using omnidirectional CI-CROS as the reference. The directivity estimates calculated from the polar patterns measured anechoically were 4.8 dB for MB and 6.4 dB for BB, implying a fair agreement between the SNR recording in the speech test setup and the directivity estimate from the polar plots, whereas the SRTs yielded a substantially lower performance. In our previous study with the Naída Link bimodal system, we reported largely similar SRT and SNR improvements with MB, but a BB benefit for SNR that was 3.7 dB greater than the corresponding improvement for SRT [Stronks et al., 2022].

The lack of difference in SRT benefits between MB and BB, despite the better performance of BB in the SNR recordings, could be attributable to listeners investing less effort when SNRs are more favorable [Sarampalis et al., 2009]. Alternatively, the mixing of the two monaural signals by BB could have eliminated binaural cues, although whether binaural cues are available to CI users is contested [Dieudonné and Francart, 2020]. Another reason may be that the nonadaptive MB variant used for the SNR recordings underestimated the SNR benefit that the adaptive variant could achieve by steering its null toward the region with the most dominant interference [Ricketts, 2001]. However, the adaptation time constant of MB is approximately 150 ms, which we expect was too slow to support effective null steering in the rapidly fluctuating noise field generated by eight uncorrelated sources of single-talker babble [Stronks et al., 2020]. Lastly, our study may have been statistically underpowered, given the relatively small SRT difference between MB and BB (0.7 dB) and the high test/retest variability; the within-subject standard deviation was 1.9 dB, and the repeatability was 5.3 dB. By feeding the post hoc multiple-comparison outcomes into the online ‘summary-statistics-based power analysis’ tool of Murayama et al. [2022], using a significance level (α) of 0.05 and a power (1 − β) of 0.8, we found that an unrealistically large sample of N = 246 would be required to show a significant SRT difference between MB and BB with the CROS system. Distinguishing among these possible reasons for the small performance difference between MB and BB requires further study. Comparison of these findings with those obtained in bilateral CI users or in a bimodally fitted population should be done with caution, however, because beamforming performance differs across device platforms. Because they have been optimized in terms of energy consumption, MB and BB in the CROS system are “lighter” versions than those installed on the CI and HA processors.

Regarding the magnitude of the speech-recognition benefit from MB and BB, several studies have used the Advanced Bionics CROS system. Two studies [Dorman et al., 2018; Núñez-Batalla et al., 2020] showed improvement, but their values were expressed as percent-correct scores and cannot be compared directly with our findings. Ernst et al. [2019b] investigated the effect of BB on SRT and reported an improvement of 4.4 dB compared with CI and 3.8 dB compared with CI-CROS in omnidirectional microphone settings. None of these studies compared MB and BB. The values of Ernst et al. [2019b] are approximately 2 dB higher than the 2.4 dB we found for the comparison with CI and the 1.4 dB compared with CI-CROS. The different noise setups may explain this discordance: Ernst et al. [2019b] used eight loudspeakers surrounding the participant, but none aligned with the frontal loudspeaker producing the speech. By contrast, we applied a homogeneous noise field in which a substantial part of the noise co-localized with the speech. Such spatially overlapping noise is immune to beamforming and will inevitably result in poorer SNR improvements. Nevertheless, we believe that the homogeneous setup better reflects real-world listening, because noise in everyday life can come from any direction relative to the target speech, including from behind it.

The 1-dB SRT benefit of CI-CROS over CI-only in omnidirectional microphone settings did not reach significance (p = 0.14 after Šidák’s correction for multiple comparisons), even though most participants (14 out of 17, or 82%) showed improved SRTs with CI-CROS. A post hoc power analysis using the method of Murayama et al. [2022], as described above, indicated that a sample of 57 participants would have been required for significance testing. Thus, our study was underpowered for statistical testing of a CROS benefit. Given that a beneficial effect of CROS on SRTs with frontal speech has been reported before [Dwyer et al., 2019], we believe that the CI-CROS benefit observed here was nonetheless genuine. The benefit can be explained by the use of a homogeneous babble noise produced by multiple uncorrelated noise sources. The CI-CROS signal is generated by summing the separate CI and CROS signals and dividing the sum by 2 (i.e., averaging). Under the assumption that the noise signals are uncorrelated, averaging the CROS and CI signals reduces the noise by a factor of the square root of 2 [Stronks et al., 2019], corresponding to an SNR improvement of approximately 3 dB. We found only 1 dB, however, probably because the CI and CROS signals always partly correlate in a homogeneous field of babble noise. Dwyer et al. [2019] explained the SRT benefit of CI-CROS as overcoming a partial head shadow effect, referred to as “face shadow,” but we believe that our explanation may be more parsimonious. More research is needed to explain the CI-CROS benefit with frontal speech more definitively.
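A small simulation illustrates this argument: averaging two channels that carry the same speech improves the SNR by about 3 dB when their noise components are uncorrelated, and by only about 1 dB when the noise is partly correlated (the correlation value of 0.6 below is an illustrative assumption).

```python
# Simulation of the averaging argument above: SNR gain of the two-channel average
# over a single channel, for uncorrelated and partly correlated noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
speech = rng.normal(size=n)                 # identical speech component in both channels

def snr_gain_db(noise_correlation):
    """SNR improvement of the averaged channel relative to a single channel."""
    shared = rng.normal(size=n)
    n1 = np.sqrt(noise_correlation) * shared + np.sqrt(1 - noise_correlation) * rng.normal(size=n)
    n2 = np.sqrt(noise_correlation) * shared + np.sqrt(1 - noise_correlation) * rng.normal(size=n)
    snr_single = np.var(speech) / np.var(n1)
    snr_average = np.var(speech) / np.var((n1 + n2) / 2)
    return 10 * np.log10(snr_average / snr_single)

print(f"Uncorrelated noise (r = 0):        {snr_gain_db(0.0):.1f} dB")  # ~3 dB
print(f"Partly correlated noise (r = 0.6): {snr_gain_db(0.6):.1f} dB")  # ~1 dB
```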

The CI and CI-CROS conditions were tested with the T-mic (the current clinical standard), whereas MB and BB operate with the processor microphones. The T-mic is placed in front of the ear canal, and the processor microphones are located on top of the BTU. The physical location of the T-mic was intended to yield a “natural directivity” benefit [Gifford and Revit, 2010], and we have found a benefit of approximately 0.5 dB in a homogeneous noise field using the KEMAR manikin (results not shown). The benefits of beamforming reported here are therefore relative to the omnidirectional T-mic setting; for a patient fitted with the processor microphone in omnidirectional settings, the benefits of MB and BB should be correspondingly higher. The effect of residual hearing on beamforming was convincingly nonsignificant (p = 1 after correction), and a post hoc power analysis conducted as described above [Murayama et al., 2022] indicated that an unrealistically large sample of at least 590 participants would be needed to achieve significance.

In conclusion, we report that in a homogeneous field of multitalker babble noise, BB significantly improves speech recognition by 1.4 dB when using CI-CROS with the T-mic as a reference. MB does not significantly improve SRTs under these conditions, and residual contralateral hearing does not significantly affect the performance of the two beamformers. Given these findings, we recommend the use of BB over MB when fitting unilateral CI users with a CROS device. Residual hearing in the CROS ear is not a limitation for the performance of either beamformer, yet a bimodal solution should be considered over CROS when substantial acoustic sensitivity remains.

The authors thank the study participants for their time and dedication.

This study adhered to the tenets of the Declaration of Helsinki [World Medical Association, 2013] and was approved by the local IRB (METC Leiden Den Haag Delft) under protocol number P02.106. All participants provided written informed consent before enrollment.

This study was partly funded by the Crossover Program of the Dutch Research Council. Advanced Bionics (European Research Center, Hannover, Germany) provided financial support.

This study was financially and technically supported by Advanced Bionics (ERC, Hannover, Germany).

H. Christiaan Stronks: IRB approval, experimental design, data collection, analyses, and draft writing. Jeroen J. Briaire and Johan H.M. Frijns: experimental design, intellectual contributions, and critical revisions of the manuscript.

The research data are not publicly available on ethical grounds. Further inquiries can be directed to the corresponding author.

References

1. Bland JM, Altman DG. Measurement error. BMJ. 1996 Sept;312(7047):1654.
2. Burkhard MD, Sachs RM. Anthropometric manikin for acoustic research. J Acoust Soc Am. 1975 Jul;58(1):214–22.
3. Carhart R. Observations on relations between thresholds for pure tones and for speech. J Speech Hear Disord. 1971 Nov;36(4):476–83.
4. Ching TY. The evidence calls for making binaural-bimodal fittings routine. Hear J. 2005 Nov;58(11):32–41.
5. Chung K, Zeng FG. Using hearing aid adaptive directional microphones to enhance cochlear implant performance. Hear Res. 2009 Apr;250(1–2):27–37.
6. Dieudonné B, Francart T. Speech understanding with bimodal stimulation is determined by monaural signal to noise ratios: no binaural cue processing involved. Ear Hear. 2020 Sept–Oct;41(5):1158–71.
7. Dorman MF, Cook Natale S, Agrawal S. The value of unilateral CIs, CI-CROS and bilateral CIs, with and without beamformer microphones, for speech understanding in a simulation of a restaurant environment. Audiol Neurootol. 2018 Dec;23(5):270–6.
8. Dreschler WA, Verschuure H, Ludvigsen C, Westermann S. ICRA noises: artificial noise signals with speech-like spectral and temporal properties for hearing instrument assessment. Int J Audiol. 2001 May–Jun;40(3):148–57.
9. Dwyer RT, Kessler D, Butera IM, Gifford RH. Contralateral routing of signal yields significant speech in noise benefit for unilateral cochlear implant recipients. J Am Acad Audiol. 2019 Jan;30(3):235–42.
10. Dyballa KH, Hehrmann P, Hamacher V, Nogueira W, Lenarz T, Buchner A. Evaluation of a transient noise reduction algorithm in cochlear implant users. Audiol Res. 2015 Jun 11;5(2):116.
11. El Fata F, James CJ, Laborde ML, Fraysse B. How much residual hearing is ‘useful’ for music perception with cochlear implants? Audiol Neurootol. 2009 Apr;14(Suppl 1):14–21.
12. Ernst A, Anton K, Brendel M, Battmer RD. Benefit of directional microphones for unilateral, bilateral and bimodal cochlear implant users. Cochlear Implants Int. 2019 May;20(3):147–57.
13. Ernst A, Baumgaertel RM, Diez A, Battmer RD. Evaluation of a wireless contralateral routing of signal (CROS) device with the Advanced Bionics Naída CI Q90 sound processor. Cochlear Implants Int. 2019 Jul;20(4):182–9.
14. Gifford RH, Revit LJ. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise. J Am Acad Audiol. 2010 Jul–Aug;21(7):441–51; quiz 487–8. https://doi.org/10.3766/jaaa.21.7.3.
15. Hehrmann P, Fredelake S, Hamacher V, Dyballa KH, Buechner A. Improved speech intelligibility with cochlear implants using state-of-the-art noise reduction algorithms. ITG Symposium on Speech Communication; Braunschweig, Germany; 2012. p. 1–3.
16. Illg A, Bojanowicz M, Lesinski-Schiedat A, Lenarz T, Büchner A. Evaluation of the bimodal benefit in a large cohort of cochlear implant subjects using a contralateral hearing aid. Otol Neurotol. 2014 Oct;35(9):e240–4.
17. Killion MC, Mueller HG. Twenty years later: a NEW Count-The-Dots method. Hear J. 2010 Jan;63(1):10–7.
18. Luts H, Jansen S, Dreschler W, Wouters J. Development and normative data for the Flemish/Dutch Matrix test. Katholieke Universiteit Leuven, Belgium, and Academic Medical Center Amsterdam, The Netherlands; 2014. Unpublished article.
19. Morera C, Manrique M, Ramos A, Garcia-Ibanez L, Cavalle L, Huarte A, et al. Advantages of binaural hearing provided through bimodal stimulation via a cochlear implant and a conventional hearing aid: a 6-month comparative study. Acta Otolaryngol. 2005 Jun;125(6):596–606.
20. Murayama K, Usami S, Sakaki M. Summary-statistics-based power analysis: a new and practical method to determine sample size for mixed-effects modeling. Psychol Methods. 2022 Jan 31. https://doi.org/10.1037/met0000330.
21. Núñez-Batalla F, Fernández-Junquera AB, Súarez-Villanueva L, Díaz-Fresno E, Sandoval-Menéndez I, Gómez Martínez J, et al. Application of wireless contralateral routing of signal (CROS) technology in unilateral cochlear implant users. Acta Otorrinolaringol Esp. 2020 Nov;71(6):333–42.
22. Ricketts TA. Directional hearing aids. Trends Amplif. 2001;5(4):139–76.
23. Ricketts TA, Hornsby BW. Distance and reverberation effects on directional benefit. Ear Hear. 2003;24(6):472–84.
24. Sarampalis A, Kalluri S, Edwards B, Hafter E. Objective measures of listening effort: effects of background noise and noise reduction. J Speech Lang Hear Res. 2009 Oct;52(5):1230–40.
25. Stronks HC, Biesheuvel JD, de Vos JJ, Boot MS, Briaire JJ, Frijns JHM. Test/retest variability of the eCAP threshold in Advanced Bionics cochlear implant users. Ear Hear. 2019 Nov/Dec;40(6):1457–66.
26. Stronks HC, Briaire J, Frijns J. Beamforming and single-microphone noise reduction: effects on signal-to-noise ratio and speech recognition of bimodal cochlear implant users. Trends Hear. 2022;26:23312165221112762.
27. Stronks HC, Briaire JJ, Frijns JHM. The temporal fine structure of background noise determines the benefit of bimodal hearing for recognizing speech. J Assoc Res Otolaryngol. 2020 Dec;21(6):527–44.
28. Taal CH, van Barneveld DC, Soede W, Briaire JJ, Frijns JH. Benefit of contralateral routing of signals for unilateral cochlear implant users. J Acoust Soc Am. 2016 Jul;140(1):393.
29. Van der Beek FB, Soede W, Frijns JHM. Evaluation of the benefit for cochlear implantees of two assistive directional microphone systems in an artificial diffuse noise situation. Ear Hear. 2007;28(1):99–110.
30. Vickers D, De Raeve L, Graham J. International survey of cochlear implant candidacy. Cochlear Implants Int. 2016;17(1):36–41.
31. World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013 Nov;310(20):2191–4.