Music, like language, is a universal trait specific to humans; it is a complex ability with characteristics that are unique compared to other cognitive abilities. Nevertheless, several issues remain open to debate, such as whether music is a faculty independent of the rest of the cognitive system, and whether musical skills are mediated by a single mechanism or by a combination of processes that are independent of one another. Moreover, the anatomical correlates of music have yet to be clarified. The goal of this review is to illustrate the current state of the neuropsychology of music and to describe different approaches to the study of musical functions. We will describe the neuropsychological findings, which suggest that music is a special function carried out by distinct, dedicated processes that are probably subserved by different anatomical regions of the brain. Moreover, we will review the evidence obtained by working with brain-damaged patients suffering from music agnosia, a selective impairment in music recognition.
Music is the art of thinking with sounds. All our lives we are surrounded by sounds and music, from the lullaby to the funeral march; music is learned quite effortlessly in early childhood and plays a particular role in promoting social cohesion and interaction between people.
In contrast to apraxia or aphasia, impairments of music perception have received considerably less clinical attention, and yet musical disorders are not rare, since accidental vascular lesions often invade the sylvian territory where the major areas devoted to music lie. The neural substrates underlying music information processing have therefore not yet been sufficiently outlined, and the question of hemispheric specificity for the diverse components of music perception remains controversial. Several issues are still open to debate, such as whether music is a faculty independent of the rest of the cognitive system, and whether musical skills are mediated by a single mechanism or by a combination of processes that are independent of one another. However, recent evidence suggests that music may well be distinct from other cognitive functions, being subserved by specialized neural networks under the guidance of innate mechanisms.
The goal of this review is to illustrate the current state of the neuropsychology of music and to describe different approaches to the study of musical functions.
Modularity of Music Processing
Experimental data suggest that music, like language, is a system of communication governed by specific rules and its own syntax, and that its comprehension is the result of a specific brain organization.
Music seems to have emerged spontaneously and very early in human evolution. Already in prehistoric societies, music played an important role in interaction among cultures and in reinforcing the bond between mother and child. Moreover, it seems that infants are sensitive to melodies and able to distinguish different rhythms. This innate capability may be a precursor of the development of language. Insofar as these abilities are constant and common, they can be expected to have a fixed neural architecture. That is, the brain implementation of music networks should be similar in the vast majority of humans, non-musicians and musicians alike. Support for the existence of such neural modules can be found in observations of selective loss, or remarkable sparing, of musical abilities after brain damage. Cases of selective sparing and of selective loss have been documented; conversely, there have been cases of impaired processing in other cognitive domains with no impairment of musical ability [6, 7]. Disease or brain injury can produce musical savants despite severe intellectual disability, or maintain musicianship in the context of severe global decline. Conversely, brain injury can produce selective breakdown patterns of musical abilities [9, 10].
Language and music are sometimes considered two aspects of a single high-order cognitive process. However, recent studies of acquired brain lesions show that the loss of musical faculties is not necessarily associated with the loss of verbal functions [11,12,13,14,15,16,17,18]. The opposite pattern, that is, aphasia without impairment of musical functions, has been described in several cases [9, 10,19,20,21,22,23]. Selective impairments of music processing show that the neural correlates of music are separate from the neural networks devoted to the recognition of spoken words and environmental sounds.
Support for the existence of a music-processing module can be found in reports of selective impairments in music recognition abilities after brain damage: such patients display no difficulty in recognizing environmental sounds or in understanding speech [14, 24, 25]. This selective impairment completes the double dissociation suggested by a previous study in which recognition of speech and environmental sounds was impossible, while recognition of melodies remained unimpaired [21, 26]. These dissociations (table 1) are incompatible with the claim that there is a single processing system responsible for the recognition of speech, music and environmental sounds. Rather, the evidence points to the existence of multiple mechanisms that are domain-specific. At least two separable systems of auditory recognition exist: one for speech and one for music. Recognition of other environmental sounds may be subserved by a specialized system as well [1,27,28,29].
The evidence that brain injury can interfere with the ability to recognize tunes that were once highly familiar to the patient has been known for more than a century [9, 30, 31]. Music recognition is a complex procedure that involves multiple processing components. Damage to one or more of these components produces a syndrome called music agnosia or amusia. ‘Amusia’ is a generic term used to designate acquired disorders of music perception, performance, reading or writing that are secondary to organic brain damage. This selective deficit cannot be explained by hearing loss, global cognitive impairment or lack of exposure to music. Patients affected with amusia fail in discrimination and recognition tasks.
A brain injury can damage motor or expressive functioning: for example, the ability to sing, whistle or hum a tune (oral-expressive amusia); the ability to play an instrument (instrumental amusia or musical apraxia); the ability to write music (musical agraphia). Conversely, the musical defect can affect the receptive dimension: the faculty to discriminate tunes (receptive or sensorial amusia); the ability to identify familiar songs (amnesic amusia); the ability to read music (musical alexia).
Discrimination and memorization are not, of course, the main reasons why people listen to music; still, deficits in these abilities can appear in isolation or associated with other disorders (generally aphasia). There are several reports of selective impairments of music recognition abilities after brain damage. Such patients can no longer recognize melodies (presented without words) that were highly familiar to them before the onset of their brain damage. In contrast, they are able to recognize spoken lyrics, familiar voices and other environmental sounds. This condition is called ‘acquired amusia’ [11, 14, 15, 17, 18, 24, 25]. In ‘congenital amusia’, individuals suffer from lifelong difficulties with music but can recognize the lyrics of familiar songs even though they are unable to recognize the tune that usually accompanies them. Recent findings show that musical functions recruit neural mechanisms in both cerebral hemispheres and also engage multiple brain regions in each hemisphere [16, 29].
The Model of Music Perception and Memory
Recognition of familiar melodies is immediate and easy for every human being. Although apparently effortless, music processing is a complex procedure that involves multiple processing components, which can be selectively disrupted or spared. Memory is essential for enjoying and performing music, and emotive resonance is fundamental for a complete experience. These basic skills depend on the adequate functioning of multiple components. It is therefore essential to have a model that specifies the processing components involved, as well as their likely interactions.
Peretz and Coltheart derived the functional architecture of music processing from case studies of specific music impairments in brain-damaged patients. In this model, a neurological anomaly can either damage a processing component (box) or interfere with the flow of information (arrow) between components (fig. 1). Peretz and Coltheart propose various music-processing modules, each concerned with a particular information-processing operation that contributes to the overall system. The musical input modules are organized in two parallel and largely independent subsystems whose functions are to specify, respectively, the melodic content and the temporal content. In melody processing, two parameters appear to be functionally important: the particular interval between two successive notes, assumed to be processed locally, and the melodic contour (i.e. the succession of pitch directions), requiring global information processing [6, 31, 32]. It is relatively well established that the essential processing components of the melodic route lie in the right superior temporal gyrus [32,33,34], with possible connections with the right frontal areas [33, 35]. In comparison, the temporal dimension would include rhythm perception (i.e. the discrimination of durational values) by local, analytic strategies, as well as the interpretation of meter (i.e. the temporal regularity or beat, corresponding to a periodic alternation between strong and weak beats) via a global mechanism of perception. Support for this dual route comes from the observation of double dissociations between the processing of melodic and temporal information in music perception [29, 32]. Both the melodic and the temporal pathways send their respective outputs to either the musical lexicon (repertoire) or the emotion expression analysis component.
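The two melodic parameters can be made concrete with a short illustrative sketch (ours, not the model's formalism; representing melodies as MIDI pitch numbers is our assumption for the example): the interval code keeps the exact semitone distance between successive notes, while the contour code keeps only the direction of each step.

```python
# Illustrative sketch: local (interval) vs. global (contour) melodic codes.
# Melodies are represented as sequences of MIDI pitch numbers (our assumption).

def intervals(melody):
    """Local code: exact semitone distance between successive notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

def contour(melody):
    """Global code: direction of each pitch step (+1 up, -1 down, 0 repeat)."""
    return [(b > a) - (b < a) for a, b in zip(melody, melody[1:])]

# A short phrase and a contour-preserving but interval-violating variant,
# the kind of distinction used in melody discrimination tasks.
original = [60, 62, 64, 60]   # C D E C
variant  = [60, 63, 64, 60]   # C Eb E C: same ups and downs, different intervals

print(intervals(original))                    # [2, 2, -4]
print(intervals(variant))                     # [3, 1, -4]
print(contour(original) == contour(variant))  # True: contour is preserved
```

Two melodies can thus be "the same" at the global level while differing locally, which is exactly the contrast exploited in the contour-preserved discrimination conditions described below.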
The repertoire is conceived as a perceptual representation system that contains all the representations of the specific musical phrases to which one has been exposed during one’s lifetime. The same system also keeps record of any new incoming musical input.
The melodic route is conceived as having primacy for accessing stored music representations. The output of the musical lexicon can feed two different components, depending on task requirements: it can activate the lexical representations for the retrieval of the accompanying lyrics, or the associative memories for retrieving all sorts of non-musical information (the title of the musical excerpt, an episode related to the first hearing of the music concerned). In parallel with the memory processes, but independently, the perceptual modules send their outputs to the emotion expression analysis component, allowing the listener to recognize and experience the emotion expressed by the music. The emotional pathway is assumed to be isolable from the non-emotional analysis of music and can be selectively damaged. According to this model, Peretz suggests that music recognition may be conceptualized as a two-stage process. That is, music agnosias may have either a perceptual (melodic) basis or a memory basis.
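The box-and-arrow logic of the model can be sketched as a small directed graph (our own illustration, with simplified component labels, not Peretz and Coltheart's official terms): components are nodes, information flow is edges, and a "lesion" removes a node, after which a given task fails whenever no intact path remains from the acoustic input to the component the task requires.

```python
# Hypothetical sketch of a box-and-arrow processing model as a directed graph.
# Node names are simplified labels for the components described in the text.

EDGES = {
    "acoustic_input":    ["melodic_analysis", "temporal_analysis"],
    "melodic_analysis":  ["musical_lexicon", "emotion_analysis"],
    "temporal_analysis": ["musical_lexicon", "emotion_analysis"],
    "musical_lexicon":   ["lyrics_lexicon", "associative_memories"],
}

def reachable(start, goal, lesioned=frozenset()):
    """True if information can still flow from start to goal,
    given a set of damaged (removed) components."""
    if start in lesioned or goal in lesioned:
        return False
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(n for n in EDGES.get(node, []) if n not in lesioned)
    return False

# Intact system: familiar tunes reach the lexicon and its associations.
assert reachable("acoustic_input", "associative_memories")
# "Associative"-type loss: the lexicon itself is damaged, perception intact.
assert not reachable("acoustic_input", "musical_lexicon",
                     lesioned=frozenset({"musical_lexicon"}))
# The emotional pathway survives a lexicon lesion, as the model assumes.
assert reachable("acoustic_input", "emotion_analysis",
                 lesioned=frozenset({"musical_lexicon"}))
```

The point of the sketch is only the dissociation logic: depending on which node or edge a lesion removes, recognition, lyric retrieval or emotional appraisal can fail independently, which is how the model accounts for the selective deficits reviewed here.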
Music agnosia may be due to a failure to encode melodic information properly. Such a recognition deficit due to a perceptual defect falls into the class of apperceptive agnosias, and it is associated with infarcts of the right superior temporal gyrus, where the essential processing components of the melodic route lie [32,33,34, 36]. The other form of music agnosia results from an isolated loss of memories for music. Such a disorder is known as associative agnosia and is related to bilateral infarcts. The neural correlates of the memory component of the music recognition system remain elusive. Learning and long-term retention of novel melodies seem to rely more on the integrity of the right than of the left hemisphere. However, recognition of highly familiar music has been shown to depend more on the left hemisphere [37,38,39,40].
Studies about the Relationship between the Brain and the Musical Functions
The study of musical disorders dates back to the origin of neuropsychology. This long-standing interest results from the fascinating observation that musical function can be impaired or spared in a highly selective fashion. F. Gall, in 1825, was among the first to affirm the existence of a ‘musical organ’ in the human brain, that is, a specific region devoted to music processing that could be selectively spared or disrupted following brain damage. Bouillaud described the first series of cases in which various musical abilities were lost following brain insult. In 1865, he reported the case of a brain-damaged musician who could sing, compose and write music, even though he was unable to speak, write or read language.
For a long time, the principal purpose of studies on amusia was the search for hemispheric dominance for music. In 1920, Henschen published the first monograph on amusia, summarizing all the cases published up to his time, adding a personal case and drawing a parallel between language and music disturbances. In spite of some reports indicating a fundamental schism between music and language, the prevailing view in the early literature was that music is a function of the left hemisphere. Henschen nevertheless admitted that the right hemisphere could take over the singing faculty of the left hemisphere in case of brain damage. He also recognized that dominance for music is not as rigid as that for language and that the musical faculty is more uniformly distributed over both hemispheres.
The insistence on left-hemisphere dominance, despite the contradictory evidence, can be explained in light of the knowledge of the time. It was thought that the right hemisphere was ‘empty’ and that its only function was to take over, after brain damage, some functions normally attributed to the left hemisphere. Much of the early published work focused on the problem of cerebral localization. However, patients were not tested in a systematic way: their premorbid musical ability was seldom taken into account, and the musical competence of the examining neurologists was generally limited. The more recent literature contains a few well-documented cases of musical impairment or preservation in musicians and in brain-damaged patients.
Studies on Surgical and Vascular Patients
In the second half of the last century, two types of patients were studied with standardized tests in order to analyze the relationship between musical function and the brain in a more objective way: patients who had undergone surgical excision of brain tissue for the relief of medically intractable epilepsy, and patients with unilateral brain damage, generally due to vascular disease.
Milner, in 1962, was the first to use a standardized battery of musical talent (the Seashore test) to investigate musical functions in a group of patients with intractable epilepsy who underwent temporal lobectomy. Patients were examined twice, before and after the operation. The test revealed that left temporal lobectomy did not affect performance, while patients with right lobectomy showed deficits in timbre perception and in the discrimination of two short musical sequences. A predominance of the right hemisphere for music processing was also found by Shankweiler in surgical patients. In his experiment, he presented a dichotic melody task to a group of patients with temporal lobectomy, before and after the operation. Before the operation, the subjects' performance did not differ from that of normal controls. Following the operation, patients with right lobectomy performed significantly worse than before, while no difference was found in the performance of the group with left temporal lobectomy.
The results of the studies on surgical patients [45,46,47] showed a clear predominance of the right hemisphere in music perception. Left-hemisphere-damaged patients were found to be impaired when verbal mediation was involved. These data indicate that the left hemisphere does not have a crucial role in purely perceptual tasks; it has, however, an evident function in the processing of familiar songs.
The first studies on vascular patients date back to 1969. Schulhoff and Goodglass used a dichotic task to replicate in vascular patients Shankweiler's results with surgical patients. Peretz described a group of patients with unilateral cerebral lesions who were given a same/different classification task. Two melodies were played to each subject. For half of the trials the melodies were the same; when they differed, three conditions were possible: contour violated, contour preserved and melody transposed. Right-brain-damaged patients, unlike normal controls and left-brain-damaged patients, were impaired when contour was useful for the discrimination. However, both brain-damaged groups were impaired on tasks requiring consideration of pitch interval structure. Peretz found evidence that music is not a monolithic entity that can be ascribed as a whole to one particular hemisphere, but rather a set of components with different lateralization patterns. Although Peretz's results confirm that both hemispheres are involved in music processing, the precise cortical regions within each hemisphere that contribute to the processing of musical components could not be specified.
In a subsequent study, Liégeois-Chauvel and Peretz tried to specify these neural regions and confirmed Peretz's results in a group of epileptic patients who had undergone unilateral temporal cortectomy. In agreement with Peretz's results, a right-sided cortectomy was found to be detrimental to the processing of both contour and interval information in the discrimination of melodies, while a left cortectomy caused an isolated deficit in interval processing. No isolated deficit was observed in the contour condition, which was systematically associated with deficits in the interval condition. These data suggest that pitch and contour processing are not independent, since recognition of contour is a necessary prerequisite to a more detailed processing of the melody. These results are consistent with the hierarchical principle of cooperation between the hemispheres put forward by Peretz: the right hemisphere primarily represents the melody in terms of its global contour, while the left encodes local interval information.
Thus, recognition of contour should precede the processing of intervals, yielding a two-stage processing cascade [31, 36, 49]. According to this principle, a right-hemisphere lesion, by disrupting the preceding subsystem required for representing the melodic contour, deprives the intact left-hemisphere structures of the anchorage points necessary for encoding interval information. Thus, unilateral brain damage in either hemisphere can affect the extraction of interval information. A similar dissociation of local (rhythm) and global (meter) processing in the temporal dimension has so far been elusive. While it has been assumed that temporal perception is based on a hierarchical system, i.e. that recognition of meter derives from intact rhythmic organization, others have proposed a model of separate levels of analysis [32, 34]. The latter view has been supported by Peretz, who demonstrated a dissociated deficit in rhythm perception in some patients in the absence of a meter perception deficit, and by Liégeois-Chauvel et al., who found the opposite pattern. In addition, no clear hemispheric preponderance of temporal processing has been found: deficits in rhythm perception have been described following right- as well as left-hemisphere damage.
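The asymmetry of this cascade can be sketched in a few lines (our illustration of the hierarchical principle, under the assumption that melodies are sequences of MIDI pitch numbers): the global contour stage feeds the local interval stage, so removing the contour stage also silences interval encoding, while removing only the interval stage leaves contour intact.

```python
# Hypothetical sketch of the hierarchical two-stage cascade: the global
# contour stage (right hemisphere) feeds the local interval stage (left).
# Melodies are sequences of MIDI pitch numbers (our assumption).

def contour_stage(melody, lesioned=False):
    """Global stage: direction of each pitch step, or None if damaged."""
    if lesioned:
        return None
    return [(b > a) - (b < a) for a, b in zip(melody, melody[1:])]

def interval_stage(melody, contour_rep, lesioned=False):
    """Local stage: exact semitone intervals, anchored on the contour.
    Fails if the stage itself is damaged OR if no contour representation arrived."""
    if lesioned or contour_rep is None:
        return None
    return [b - a for a, b in zip(melody, melody[1:])]

melody = [60, 62, 64, 67]  # C D E G

# Intact system: both codes are available.
assert interval_stage(melody, contour_stage(melody)) == [2, 2, 3]
# "Right-hemisphere" lesion (contour stage): interval encoding also fails.
assert interval_stage(melody, contour_stage(melody, lesioned=True)) is None
# "Left-hemisphere" lesion (interval stage): the contour code survives alone.
assert contour_stage(melody) == [1, 1, 1]
assert interval_stage(melody, contour_stage(melody), lesioned=True) is None
```

This reproduces the pattern reported above: a lesion at either stage degrades interval extraction, but an isolated contour deficit without an interval deficit cannot occur.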
No firm conclusion can be drawn from these studies of musical perception in groups of surgical or vascular patients, except that musical function is not lateralized to one hemisphere to the same degree as language and praxis are. Both hemispheres contribute: the right has a major role, but the left hemisphere has an evident function in verbal processing of melodies and in the recognition of highly familiar songs [37,38,39,40].
The Dichotic Listening Technique
For many years, evidence from brain-damaged patients was the only source of information for neuropsychological studies. More recently, the dichotic listening technique became a new tool to study the relationship between music and the brain in normal subjects. This technique was first used with verbal stimuli, yielding a right ear advantage (REA), i.e. left-hemisphere superiority; conversely, a left ear advantage (LEA) implies right-hemisphere superiority. The dichotic listening technique was also used with musical stimuli: Kimura's paradigm was a dichotic presentation of two melodies followed by a binaural presentation of four foils, among which the subject had to recognize the two previously heard melodies. An LEA was found for instrumental melodies, as Shankweiler had found in brain-damaged patients. However, the LEA for musical material is more evident than the REA for verbal stimuli.
Gordon presented digits, melodies and chords to 20 musicians and found an REA for digit recognition and an LEA for chords, but no left-ear dominance for melodic stimuli. Based on these data, Gordon assumed that only the non-temporal, holistic aspects of music are better processed by the right hemisphere, whereas the left hemisphere is dominant in the analysis of temporally patterned stimulation, specialization for language being only a special case of this general function. In a later study, Gordon modified the same melodies so that they differed by pitch or by rhythm. Results indicated an LEA for chord recognition and an REA for the melodies that differed in rhythm. No LEA was obtained for melodies differing in the pitch dimension only.
The dichotomy of language in the left and music in the right hemisphere is too simplistic. Gordon’s results, in fact, indicate that recognition of melodies on the basis of pitch patterns is a complex process that for some subcomponents depends on the right and for others on the left hemisphere. Cooperation coupled with specialization of the two hemispheres can also be seen in other dichotic listening studies.
Bartholomeus found an LEA for melody recognition and an REA for letter sequence recognition, showing that hemispheric dominance can be manipulated by the task requirements and that the two hemispheres can independently process the component of a complex stimulus for which each is dominant. Gates and Bradshaw found that the left hemisphere was faster and the right more accurate in detecting rhythm and pitch changes. Moreover, this study showed that excerpts from familiar melodies were better recognized by the left hemisphere, whereas the right hemisphere had a major role in the recognition of unfamiliar melodies. In sum, the experimental evidence from the dichotic listening technique indicates a major, but not exclusive, role for the right hemisphere in music processing.
Thanks to the progress of neuropsychology and to the development of new techniques for the study of musical function, we can no longer accept the traditional dichotomy of language in the left and music in the right hemisphere. This simple view has been untenable since 1974, when Bever and Chiarello demonstrated the influence of professional training on hemispheric lateralization during music processing. In a recognition task, naive subjects showed the expected LEA, while musicians showed an REA. Bever and Chiarello suggested that musicians break tonal sequences down into their constituent elements and therefore process the information analytically through the left hemisphere.
In the following years, the results of several brain imaging studies supported the idea that left or right superiority could depend on the nature of the procedures used to perform a task. A group of professional musicians was presented with three dichotic tasks of increasing difficulty. As expected, with increasing task difficulty, a higher degree of left-hemisphere advantage was found, in view of its specialization for complex analytic processing. This tendency toward left-hemisphere lateralization in professional musicians has been observed in various studies, in rhythmic as well as melodic processing, and has been ascribed to the different cognitive strategies used by trained and untrained listeners: professional musicians process musical information in a more analytical way (left hemisphere) than subjects without such training.
Spontaneous musical performance, whether through singing or playing an instrument, can be defined as the immediate, on-line improvisation of novel melodic, harmonic and rhythmic musical elements within a relevant musical context. Most importantly, the study of spontaneous musical improvisation may provide insights into the neural correlates of the creative process [60, 61]. Spontaneous artistic creativity is often considered one of the most mysterious forms of creative behavior, frequently described as occurring in an altered state of mind, beyond conscious awareness or control.
Bengtsson et al. used functional magnetic resonance imaging to investigate which brain regions are involved in the free generation of responses in a complex creative behavior: musical improvisation. Eleven professional pianists participated in the study. Activated brain regions included the right dorsolateral prefrontal cortex, the presupplementary motor area, the rostral portion of the dorsal premotor cortex, and the left posterior part of the superior temporal gyrus.
To investigate the neural substrates that underlie spontaneous musical performance, Limb and Braun  examined improvisation in professional jazz pianists using functional magnetic resonance imaging. By employing two paradigms that differed widely in musical complexity, they found that improvisation (compared to production of overlearned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensory-motor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.
Localization of the neural substrates underlying music information processing has been an enduring problem for more than a century, and the question of hemispheric specificity for the diverse components of music perception remains controversial. The miscellaneous manifestations of post-lesional musical deficits suggest that musical information processing may, to a considerable degree, be based on a highly individual network. Musical functions do not show a clear hemispheric lateralization, and the neural substrates underlying local and global musical information processing form a heterogeneous, fragmented system spanning both hemispheres. It seems that music processing is based on widely distributed but locally specialized subsystems [24, 60], modulated by individual aspects of musicality and musical experience. In addition, music perception is not exclusively based on music-specific processing substrates: diverse generic cognitive functions are also likely to be engaged to varying degrees, e.g. attention, working memory, the phonological loop and frontal functions. Another important aspect not taken into account in studies of amusia is cerebral plasticity. In order to rule out relatively short-term plastic changes, patients should be examined at an early stage and within a limited time frame following their cerebrovascular accidents; retest measurements would then be necessary to evaluate the potential improvement of musical functions during rehabilitation.
In order to make considerable advances in the neuropsychology of music, the first requirement is to establish a detailed model of normal music processing, which can clarify which abilities in the musical domain are common to all human beings and which are specific to talented subjects. Progress in this direction depends not only on advances in neuroimaging techniques, but also on the fractionation of musical abilities into their component processes.