With the growing number of older adults receiving cochlear implants (CI), there is general agreement that substantial benefits can be gained. Nonetheless, variability in speech perception performance is high, and the relative contributions of, and interactions among, peripheral, central-auditory, and cognitive factors are not fully understood. The goal of the present study was to compare auditory-cognitive processing in older-adult CI recipients with that of older normal-hearing (NH) listeners by means of behavioral and electrophysiologic manifestations of a high-load cognitive task. Auditory event-related potentials (AERPs) were recorded from 9 older postlingually deafened adults with CI (age at implantation >60 years) and 10 age-matched NH listeners while they performed an auditory Stroop task. Participants were required to classify the gender (male/female) of the speaker who produced the word 'mother' or 'father' while ignoring the irrelevant congruent or incongruent word meaning. Older CI and NH listeners exhibited comparable reaction time, performance accuracy, and initial sensory-perceptual processing (i.e. the N1 potential). Nonetheless, older CI recipients showed substantially prolonged and less efficient perceptual processing (i.e. the P3 potential). Congruency effects manifested in longer reaction time (i.e. the Stroop effect), execution time, and P3 latency to incongruent versus congruent stimuli in both groups in a similar fashion; however, markedly prolonged P3 and shortened execution time were evident in older CI recipients. Collectively, older adults (CI and NH) employed a combined perceptual and postperceptual conflict processing strategy; nonetheless, in CI recipients the relative allotment of perceptual resources was substantially enhanced to maintain adequate performance. In sum, the recording of AERPs together with the simultaneously obtained behavioral measures during a Stroop task exposed a differential time course of auditory-cognitive processing in older CI recipients that was not manifested in the behavioral end products of processing. These data may have implications for clinical evaluation and rehabilitation procedures, which should be tailored specifically to this unique group of patients.

With the continuous increase in life expectancy, a growing number of older adults are in need of hearing habilitation by means of cochlear implants (CI). It is generally agreed that postlingually deafened older adults benefit from CI, and a growing body of evidence shows improvements in speech perception, communication, and quality of life after implantation [e.g. see review by Clark et al., 2012; Cloutier et al., 2014]. Nonetheless, wide variability in speech perception performance is reported, with some studies showing outcomes similar to [e.g. Budenz et al., 2011], and others reporting outcomes poorer than [e.g. Roberts et al., 2013], those of young adults with CI.

The benefits derived from CI are routinely assessed by means of speech perception tests presented in quiet and in noise. These tests are sensitive, to some extent, to the difficulties encountered by older CI recipients but do not tap the cognitive aspects of speech recognition that are engaged during communication and are known to decline with increasing age. Prominent among these are deficits in attention, general speed of processing, memory, and inhibitory capabilities [see review by Hasher and Zacks, 1988; Pichora-Fuller and Singh, 2006] that, together with central auditory processing, have been shown to predict a significant proportion of the variance in speech perception in challenging listening conditions [Anderson et al., 2013].

Electrophysiological measures provide a potential objective means of assessing auditory processing capabilities with a CI. Specifically, auditory event-related potentials (AERPs) allow evaluation of the time course of cortical information processing, from early perceptual to later postperceptual, cognitive stages. Thus far, AERP data have provided compelling evidence regarding the timing and strength of the neural events underlying speech perception in younger postlingually deafened CI recipients [Beynon et al., 2005; Henkin et al., 2009]. In most studies, tones and/or speech stimuli (vowels, syllables) were employed in oddball discrimination tasks. For example, Henkin et al. [2009] presented a hierarchical set of syllables that differed by one phonetic contrast and showed comparable P3 potentials in CI recipients and normal-hearing (NH) controls when the acoustic cues to the perception of the phonetic contrast were accessible (e.g. vowel place). Only when accessibility to the essential temporal and/or spectral cues was reduced, as in the place-of-articulation contrast, did CI recipients exhibit delayed (prolonged P3 latencies) and less synchronous (reduced amplitudes) central speech-sound processing compared to NH listeners. Thus, despite the contribution of top-down processes (i.e. stored mental representations of phonetic categories acquired prior to the loss of hearing), compromised bottom-up processes (i.e. impoverished acoustic information transmitted via the CI) resulted in aberrant speech processing in young CI recipients.

To date, AERPs have not been used specifically to study auditory processing in older adults with CI. The working premise of the current study was that by increasing task complexity and cognitive load, AERPs may expose auditory-cognitive processing difficulties of older adults with CI that do not manifest in tasks, such as acoustic-phonetic discrimination tasks, that tax cognitive control processes to a lesser extent. A task developed specifically to evaluate the ability to attend selectively to a targeted dimension while ignoring an irrelevant, conflicting dimension is the Stroop task [Stroop, 1935]. Recently, we constructed an auditory Stroop task in Hebrew in which listeners were required to classify the gender of the speaker (male or female) who produced one of two meaningful words, 'father' /aba/ or 'mother' /ima/, while ignoring the word's meaning [Henkin et al., 2010]. Stimuli were either congruent (e.g. a male speaker producing the word 'father') or incongruent (e.g. a male speaker producing the word 'mother'). Data from a group of young NH listeners indicated a significant behavioral Stroop effect that manifested in prolonged reaction time to incongruent versus congruent stimuli. In contrast, AERP latencies were unaffected by congruency, supporting the notion that conflict processing took place predominantly during postperceptual, response selection and execution stages [Henkin et al., 2010]. Hence, simultaneous acquisition of behavioral and AERP measures revealed the auditory-cognitive processing strategy used by young NH listeners. The goal of the present study was to compare auditory-cognitive processing of older adult CI recipients to that of older NH listeners by means of AERPs and the simultaneously obtained behavioral measures during a Stroop task.

Subjects. Among older (≥60 years) postlingually deafened adults who were implanted at the Sheba Medical Center, Tel Hashomer, and had been using their CI for at least 1 year, 9 recipients agreed to participate in the study and fulfilled the following inclusion criteria: (1) no history of psychiatric or cognitive illness, brain damage, stroke, or any other central nervous system disorder; (2) performance within the normal range on the Mini Mental State Examination [Folstein et al., 2002], a questionnaire commonly used to screen for cognitive impairment, and on the Digit Span test [Wechsler, 1997], known to reflect auditory attention and short-term retention capacity [Lezak et al., 2004].

The mean age at implantation was 66.1 years (range 60-78), and the mean age at testing was 71.5 years (range 64.1-83.9). Individual background information is provided in table 1. Ten older adults with NH for their age [Engdahl et al., 2005] who fulfilled the inclusion criteria participated as controls; their mean age at testing was 70.4 years (range 65-83). Group mean scores of the CI and NH listeners on the cognitive screening tests were comparable (table 1). Seventeen participants were right handed, and 2 CI recipients were left handed.

Table 1

Background information for CI and NH listeners


Stimuli. The Stroop task included two types of stimuli: (1) congruent stimuli, i.e. the Hebrew vowel-consonant-vowel words /aba/ (father) and /ima/ (mother) produced by a male and a female speaker, respectively; (2) incongruent stimuli, i.e. the word /aba/ produced by the female speaker and the word /ima/ produced by the male speaker. Stimuli, produced by 2 adult (male and female) native Hebrew speakers, were digitally recorded at a 44-kHz sampling rate and 16-bit quantization using Sound Forge 4.5. From a large sample of naturally produced stimuli, a final set of 4 words, each with a duration of 375 ms, was selected. Based on our previous experience regarding the effect of stimulus duration on the CI electrical artifact, each word was shortened from 375 to 300 ms using the Pitch-Synchronous Overlap and Add (PSOLA) method implemented in the Praat 5.3.39 software package. All stimuli had similar vowel and consonant durations (initial vowel, 113-120 ms; consonant, 78-100 ms; final vowel, 84-110 ms). The average fundamental frequency was stable within each word (male: /aba/, 92 Hz; /ima/, 100 Hz; female: /aba/, 180 Hz; /ima/, 190 Hz). Stimuli were presented every 2 s at 62 dB SPL via a loudspeaker located 1 m in front of the subject. All participants reported that the stimulation level was comfortable.
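For readers wishing to reproduce a comparable duration manipulation, a minimal sketch using the parselmouth Python interface to Praat is given below. It is illustrative only; the file name, time step, and pitch-range settings are assumptions and not necessarily the values used for the original recordings.

```python
import parselmouth
from parselmouth.praat import call

# Minimal sketch (assumed parameters): PSOLA-based shortening of a 375-ms token to 300 ms.
snd = parselmouth.Sound("aba_male.wav")          # hypothetical file name
dur = snd.get_total_duration()

# Build a Manipulation object (time step, pitch floor, pitch ceiling in Hz)
manipulation = call(snd, "To Manipulation", 0.01, 60, 300)

# A constant relative duration of 0.300/0.375 = 0.8 compresses the whole word uniformly
duration_tier = call("Create DurationTier", "shorten", 0, dur)
call(duration_tier, "Add point", dur / 2, 0.300 / 0.375)
call([manipulation, duration_tier], "Replace duration tier")

# Overlap-add resynthesis yields the shortened token
shortened = call(manipulation, "Get resynthesis (overlap-add)")
call(shortened, "Save as WAV file", "aba_male_300ms.wav")
```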

AERP Recordings. Brain electrical activity was recorded from 32 scalp sites using Electro-Cap tin electrodes placed according to the 10-20 system and referenced to the chin. The impedance of each electrode was below 5 kΩ. A ground electrode was placed on the mastoid contralateral to the CI. Eye movements were monitored by electrodes above and below the right eye. The EEG and electrooculogram channels were amplified (100,000× and 20,000×, respectively), digitized with a 12-bit A/D converter at a rate of 1,000 samples/s, filtered (0.1-100 Hz, 6 dB/octave slopes), and stored for off-line analysis. The recording window consisted of a 200-ms prestimulus period and an 1,800-ms poststimulus period.
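A brief sketch of how such a recording window could be reconstructed off-line with MNE-Python is shown below; it is not the authors' pipeline, and the raw-file name, file format, and event labels are assumptions.

```python
import mne

# Illustrative sketch: epoch continuous EEG sampled at 1,000 Hz into the
# -200 to +1,800 ms window described above (file name and event codes assumed).
raw = mne.io.read_raw_brainvision("ci_subject01.vhdr", preload=True)
raw.filter(l_freq=0.1, h_freq=100.0)          # match the reported analog passband
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.8,       # 200-ms prestimulus, 1,800-ms poststimulus
                    baseline=(-0.2, 0.0), preload=True)
evoked_congruent = epochs["congruent"].average()   # assumes an annotation named 'congruent'
```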

Procedure. After electrode application, subjects were seated in a comfortable armchair in a sound-proof room and were instructed to avoid excessive eye and facial movements during recordings. Subjects were instructed to identify the speaker's gender by pressing one of two buttons (1, male; 2, female) on the response box and to give equal consideration to accuracy and speed. Two hundred stimuli, divided into two blocks of 100 stimuli, were presented. The appearance probability of each stimulus was 0.25. Stimulus presentation order was pseudo-random, such that no more than two identical stimuli were presented consecutively. The order of the two blocks was counterbalanced across subjects.
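For illustration, one way to generate such a constrained pseudo-random order is sketched below; the stimulus labels, seed, and rejection-sampling approach are assumptions and not necessarily the authors' method.

```python
import random

# Illustrative sketch: build a 100-trial block in which each of the 4 stimuli
# appears 25 times (p = 0.25) and no more than two identical stimuli occur in a row.
def make_block(stimuli=("aba_m", "ima_f", "aba_f", "ima_m"), reps=25, seed=1):
    rng = random.Random(seed)
    while True:
        order = [s for s in stimuli for _ in range(reps)]
        rng.shuffle(order)
        # reject any shuffle containing three or more identical stimuli in a row
        if not any(order[i] == order[i + 1] == order[i + 2]
                   for i in range(len(order) - 2)):
            return order

block = make_block()
print(block[:10])
```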

Data Analysis. Each record was inspected individually, and single records contaminated with artifacts during the time window of interest were removed manually. Eye blinks appearing in the electrooculogram signal were regressed out of the EEG using an off-line eye movement correction procedure. AERP data from one CI subject were excluded from the analysis because the recordings were significantly contaminated by eye movements and myogenic artifacts. EEG data of 2 CI recipients that contained the CI electrical artifact with amplitudes exceeding ±150 μV were subjected to independent component analysis (ICA) by means of EEGLAB 4.5 [Delorme and Makeig, 2004] running in the MATLAB environment. The ICA was applied to remove the CI artifact according to Gilley et al. [2006]. The potentials N1, P3, and N4 were identified based on their latency, amplitude, and scalp distribution.
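A minimal sketch of ICA-based artifact removal, in the spirit of Gilley et al. [2006] but using MNE-Python rather than EEGLAB, is given below. The number of components and the indices of the artifact components are assumptions; in practice they are chosen by visual inspection of component topographies and time courses.

```python
from mne.preprocessing import ICA

# Illustrative sketch: remove CI-artifact components from the epoched EEG
# ('epochs' as constructed in the earlier epoching sketch).
ica = ICA(n_components=20, method="infomax", random_state=42)
ica.fit(epochs)                      # decompose the epoched EEG
ica.plot_components()                # inspect maps; the CI artifact typically loads
                                     # on electrodes nearest the implanted device
ica.exclude = [0, 3]                 # hypothetical indices of artifact components
epochs_clean = ica.apply(epochs.copy())
```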

The latencies and amplitudes of N1, P3, and N4 as well as performance accuracy, reaction time, and execution time (time elapsed from P3 to reaction time) were subjected to multivariate analysis of variance with repeated measures using the mixed procedure for testing the effects of group (CI vs. NH) and congruency (congruent vs. incongruent).
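An analogous analysis can be sketched in Python with a linear mixed-effects model, shown below for one dependent measure (P3 latency); this is illustrative only, and the data file and column names are assumptions. Execution time is derived, as defined above, as reaction time minus P3 latency.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative sketch (assumed long-format file with columns:
# subject, group, congruency, p3_latency, rt).
df = pd.read_csv("stroop_aerp_long.csv")
df["execution_time"] = df["rt"] - df["p3_latency"]   # time elapsed from P3 to reaction time

# Fixed effects of group, congruency, and their interaction; subject as random effect
model = smf.mixedlm("p3_latency ~ group * congruency", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```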

Behavioral Measures. Mean reaction time and performance accuracy to incongruent and congruent stimuli in the NH and CI groups are presented in table 2. A significant main effect of congruency indicated longer reaction time and reduced performance accuracy to incongruent versus congruent stimuli [F(1, 17) = 155, p < 0.0001; F(1, 17) = 14.1, p = 0.002, respectively]. The main effect of group and the group × congruency interaction were not significant (reaction time: p = 0.46, 0.74; performance accuracy: p = 0.1, 0.63, respectively).

Table 2

Mean group reaction time, percent correct, and execution time to congruent and incongruent stimuli in CI and NH listeners


Auditory Event-Related Potentials. In the NH group, N1 and P3 were identified in 100 and 95% of recordings, respectively. Grand average waveforms to congruent and incongruent stimuli of the NH listeners who did not exhibit N4 are depicted in figure 1a (n = 7). In the CI group, N1 and P3 were identified in 87 and 94% of recordings, respectively. Figure 1b shows representative waveforms to congruent and incongruent stimuli from CI subject 7. Only 30% of NH listeners and 25% of CI recipients exhibited the N4 potential. Figure 1c, d depicts waveforms that include N4, as seen in NH subject 3 and CI subject 6, respectively. Scalp distribution in both groups was frontal for N1 and N4 and parietal for P3.

Fig. 1

a Grand average waveforms to congruent (grey) and incongruent (black) stimuli from three midline electrodes (Fz, Cz, Pz) from older NH listeners that did not exhibit the N4 potential (n = 7). b Averaged waveforms from CI subject 7 depicting N1 and P3. c NH subject 3 depicting N1, P3, and N4. d CI subject 6 depicting N1, P3, and N4. Stimulus onset is indicated by the upward arrow.


Group mean N1 and P3 latencies and amplitudes are presented in table 3. Significant main effects of group and congruency on P3 latency indicated longer latency in the CI versus the NH group [F(1, 16) = 33.2, p < 0.0001], and to incongruent versus congruent stimuli [F(1, 14) = 6.5, p = 0.026]. The group × congruency interaction was not significant (p = 0.8). A marginally significant main effect of group on P3 amplitude indicated smaller amplitudes in CI versus NH listeners [F(1, 16) = 3.97, p = 0.06]. Furthermore, a significant main effect of congruency on P3 amplitude indicated smaller amplitudes to incongruent versus congruent stimuli [F(1, 14) = 9.4, p = 0.008]. A significant group × congruency interaction indicated that while amplitudes to incongruent and congruent stimuli were comparable in CI listeners, amplitudes to incongruent stimuli were smaller in NH listeners [F(1, 14) = 9.4, p = 0.008]. The main effects of group and congruency on N1 latency and amplitude were not significant (p ≥ 0.1).

Table 3

Mean group latencies and amplitudes of N1 and P3 elicited by congruent and incongruent stimuli in CI and NH listeners


Mean N4 latencies and amplitudes for NH were: incongruent 733 ms (SD 111), -6 µV (SD 1.8); congruent 673 ms (SD 61), -5.9 µV (SD 3.3), and for CI listeners: incongruent 836 ms (SD 196), -3.1 µV (SD 0.5); congruent 820 ms (SD 237), -4 µV (SD 0.1). Statistical analysis was not applied due to the small sample size.

Execution Time. Mean execution time to incongruent and congruent stimuli of the NH and CI groups is presented in table 2. A significant main effect of congruency indicated longer execution time to incongruent versus congruent stimuli [F(1, 14) = 27.8, p = 0.0001]. Furthermore, a significant main effect of group indicated that execution time was shorter in CI versus NH listeners [F(1, 14) = 9.9, p = 0.007]. The group × congruency interaction was not significant (p = 0.5).

Postlingually deafened, older CI recipients exhibited a significant Stroop effect that was similar in magnitude to that of age-matched NH listeners. Moreover, performance accuracy, reaction times, and initial sensory-perceptual processing, as manifested in N1 latencies, were comparable in the two groups. Nonetheless, older CI recipients exhibited substantially prolonged and less efficient perceptual processing, as manifested in P3 latencies, and employed a differential conflict processing strategy.

The finding of similar Stroop effect magnitude, reaction time, and performance accuracy in the studied groups highlights the substantial benefit gained from the CI device by older recipients. Furthermore, the comparable N1 latency and amplitude in the two groups reflect similar cortical detection and encoding of the physical characteristics of the stimulus [Näätänen and Picton, 1987]. Converging bottom-up and top-down information required at this early stage of cortical processing may underlie this finding.

A significant difference between older CI recipients and NH listeners was manifested in a prolonged and diminished P3. Similarly, young postlingually deafened CI recipients showed prolonged and reduced P3 compared to age-matched NH listeners while performing oddball discrimination tasks that included speech stimuli with low accessibility to the essential temporal and/or spectral cues [e.g. Beynon et al., 2005; Henkin et al., 2009]. Numerous studies using a wide variety of stimuli imply that P3 reflects discrimination, categorization, and closure of the stimulus evaluation process [see review by Polich, 2007]. Shorter P3 latency and larger amplitude are associated with highly synchronized neural activity that is evident when task demands are relatively simple. In contrast, with increasing acoustic-phonetic, semantic, and cognitive demands, longer latencies and smaller amplitudes have been reported [Henkin et al., 2009; Polich, 2007]. Moreover, P3 has been suggested to reflect inhibitory activity that restricts processing of interfering, irrelevant events. Because infrequent, low-probability stimuli can be important and relevant, it is adaptive to inhibit unrelated activity and thereby increase neural synchronization, resulting in larger P3 amplitude to target stimuli [Polich, 2007]. On the other hand, high cognitive demand, such as that imposed by a Stroop task, may limit the attentional resources available for inhibitory control, resulting in a smaller P3. Taken together, the finding of prolonged P3 together with comparable N1 latencies supports the notion of delayed processing time in older CI recipients compared to age-matched NH listeners.

In the current auditory Stroop task, listeners were required to identify the speaker's gender, a judgment based predominantly on perception of the fundamental frequency (F0) (male: 92-100 Hz; female: 180-190 Hz). Our group of patients, using a variety of CI devices, exhibited high levels of performance accuracy (range 72-100%), similar to those reported in previous studies, which also showed that gender identification was not significantly affected by the type of CI device [Landwehr et al., 2014]. Although listeners in the current study were instructed to focus on gender identification and to ignore word meaning, it is plausible that implicit linguistic processing (i.e. acoustic-phonetic, semantic) took place, even though it was not explicitly required. Interestingly, fMRI data in young NH listeners have shown no significant recruitment of the voice-processing cortical region during a linguistic task, whereas during a voice recognition task, linguistic-processing regions were activated. The authors suggested that the analysis of vocal features cannot be accomplished as an isolated process but only in addition to ongoing implicit verbal analysis [Kriegstein et al., 2003]. It is plausible that CI recipients allocated even greater linguistic/perceptual resources, compared to NH listeners, to maintain high performance accuracy. This notion is supported by PET data showing that despite similar performance of CI recipients and NH controls, CI recipients allocated more neuronal resources to acoustic and early phonologic stages at the expense of late phonological and semantic processing [Giraud et al., 2000].

The current approach, in which behavioral and electrophysiologic measures were obtained simultaneously, exposed a differential, effortful time course of auditory-cognitive processing in older CI recipients that did not manifest in the behavioral end products of processing, reaction time and performance accuracy. As depicted in figure 2, congruency effects manifested in reaction time (the Stroop effect), execution time, and P3 latency in both groups in a similar fashion; however, substantially prolonged P3 and shortened execution time suggest that CI recipients invested greater perceptual effort to maintain adequate performance. Taken together, older adults (CI and NH) utilized a combined perceptual and postperceptual conflict processing strategy; nonetheless, the relative allotment of perceptual resources was substantially enhanced, and the response selection and execution processes were significantly truncated, in older CI recipients. Interestingly, in agreement with the Stroop parallel processing model of MacLeod and MacDonald [2000], young NH listeners employed a predominantly postperceptual strategy that manifested in significant congruency effects on reaction time with no effects on AERP latencies [Henkin et al., 2010]. We therefore assume that listening becomes more effortful when perception is affected by age-related 'normal' changes in bottom-up and top-down processing [Pichora-Fuller and Singh, 2006], and to a significantly greater extent when it is compromised by profound hearing loss habilitated by a CI.

Fig. 2

The time course (in ms) of auditory-cognitive processing to congruent and incongruent stimuli in older CI recipients and older NH listeners. The bars represent mean N1 latency, P3 latency, and reaction time (RT). Execution time (i.e. time elapsed from P3 to reaction time) is depicted by the broken line.


Support for an age-related effect on auditory-cognitive processing was also manifested in AERP morphology. Whereas a robust frontal N4 potential was evident in 15 of 16 young NH listeners performing the auditory Stroop task [Henkin et al., 2010], only 3 older NH listeners and 2 older CI recipients in the current study exhibited N4. The absence in most participants of N4, a potential known to reflect semantic processing, contextual integration, and ease of access to long-term memory [Kutas and Federmeier, 2000], is consistent with the age-related reduction in N400 in older NH subjects reported by Kutas and Iragui [1998]. Slower processing time and less efficient inhibitory mechanisms may lead to poor integration, as indexed by a smaller or absent N4. This finding requires further substantiation and clarification regarding putative underlying mechanisms.

Finally, the data acquired in the current study from older CI and older NH listeners, together with data from young CI recipients and young NH listeners that we are currently collecting, may delineate the relative contributions of age, of hearing loss habilitated by CI, and of the interaction between the two. From a clinical perspective, with the growing number of older adults in the general population, and of those in need of CI in particular, such data may have implications for clinical evaluation and rehabilitation procedures, which should be tailored specifically to this unique group of patients.

The authors thank Dr. S. Gilat for technical assistance and E. Shabtai for statistical analysis.

This study was supported by MED-EL.

1. Anderson S, White-Schwoch T, Parbery-Clark A, Kraus N: A dynamic auditory-cognitive system supports speech-in-noise perception in older adults. Hear Res 2013;300:18-32.
2. Beynon AJ, Snik AF, Stegeman DF, Van den Broek P: Discrimination of speech sound contrasts determined with behavioral tests and event-related potentials in cochlear implant recipients. J Am Acad Audiol 2005;16:42-53.
3. Budenz CL, Cosetti MK, Coelho DH, Birenbaum B, Babb J, Waltzman SB, Roehm PC: The effects of cochlear implantation on speech perception in older adults. J Am Geriatr Soc 2011;59:446-453.
4. Clark JH, Yeagle J, Arbaje AI, Lin FR, Niparko JK, Francis HW: Cochlear implant rehabilitation in older adults: literature review and proposal of a conceptual framework. J Am Geriatr Soc 2012;60:1936-1945.
5. Cloutier F, Bussieres R, Ferron P, Cote M: OCTO 'Outcomes of cochlear implant for the octogenarians: audiologic and quality-of-life'. Otol Neurotol 2014;35:22-28.
6. Delorme A, Makeig S: EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 2004;134:9-21.
7. Engdahl B, Tambs K, Borchgrevink HM, Hoffman HJ: Screened and unscreened hearing threshold levels for the adult population: results from the Nord-Trøndelag Hearing Loss Study. Int J Audiol 2005;44:213-230.
8. Folstein MF, Folstein SE, Fanjiang G: Mini-Mental State Examination (MMSE). Lutz, Psychological Assessment Resources, 2002.
9. Gilley PM, Sharma A, Dorman M, Finley CC, Panch AS, Martin K: Minimization of cochlear implant stimulus artifact in cortical auditory evoked potentials. Clin Neurophysiol 2006;117:1772-1782.
10. Giraud AL, Truy E, Frackowiak RS, Gregoire MC, Pujol JF, Collet L: Differential recruitment of the speech processing system in healthy subjects and rehabilitated cochlear implant patients. Brain 2000;123:1391-1402.
11. Hasher L, Zacks RT: Working memory, comprehension, and aging: a review and a new view; in Bower GH (ed): The Psychology of Learning and Motivation. San Diego, Academic Press, 1988, vol 22, pp 193-225.
12. Henkin Y, Tetin-Schneider S, Hildesheimer M, Kishon-Rabin L: Cortical neural activity underlying speech perception in post-lingual adult cochlear implant recipients. Audiol Neurotol 2009;14:39-53.
13. Henkin Y, Yaar-Soffer Y, Gilat S, Muchnik C: Auditory conflict processing: behavioral and electrophysiological manifestations of the Stroop effect. J Am Acad Audiol 2010;21:474-486.
14. Kriegstein K, Eger E, Kleinschmidt A, Giraud AL: Modulation of neural responses to speech by directing attention to voices or verbal content. Brain Res Cogn Brain Res 2003;17:48-55.
15. Kutas M, Federmeier KD: Electrophysiology reveals semantic memory use in language comprehension. Trends Cogn Sci 2000;4:463-470.
16. Kutas M, Iragui V: The N400 in a semantic categorization task across 6 decades. Electroencephalogr Clin Neurophysiol 1998;108:456-471.
17. Landwehr M, Furstenberg D, Walger M, von Wedel H, Meister H: Effects of various electrode configurations on music perception, intonation and speaker gender identification. Cochlear Implants Int 2014;15:27-35.
18. Lezak M, Howieson DB, Loring DW: Neuropsychological Assessment, ed 4. New York, Oxford University Press, 2004.
19. Näätänen R, Picton T: The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 1987;24:375-425.
20. Pichora-Fuller MK, Singh G: Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiologic rehabilitation. Trends Amplif 2006;10:29-59.
21. Polich J: Updating P300: an integrative theory of P3a and P3b. Clin Neurophysiol 2007;118:2128-2148.
22. Roberts DS, Lin HW, Herrmann BS, Lee DJ: Differential cochlear implant outcomes in older adults. Laryngoscope 2013;123:1952-1956.
23. Stroop JR: Studies of interference in serial verbal reactions. J Exp Psychol 1935;18:643-662.
24. Wechsler D: Wechsler Adult Intelligence Scale-III. London, Psychological Corporation, 1997.