Dr Renee Timmers is collaborating with Australian colleagues to investigate how musicians communicate with each other while playing in a duo
Contrary to what might be expected, musical communication is at heart cross-modal, not unimodal. It is true that audition is central to typical musical experiences (e.g. listening over headphones), and that musical sounds receive meaning in reference to each other (e.g. one tone forms the bass with respect to a higher melody tone). Nevertheless, when music is acoustically produced and when musicians perform together, music is as much a physical and visual activity as an auditory one. Moreover, like sounds in general, musical sounds are imbued with associations with non-auditory phenomena. For example, soft sounds may be perceived as small and far away, in contrast to loud sounds, which are heard as large and nearby.
On the one hand, previous research has demonstrated the relevance of visual information for the interpretation and evaluation of a musical performance as expressive: depending on a musician's movements, visual appearance and facial expressions, a performance is perceived as more or less expressive. In an extreme example, evaluations of visual recordings were a better predictor of the winners of a piano competition than evaluations of audio recordings. On the other hand, research has also shown that pairs of musicians do not need to see each other to synchronise; being able to hear the other performer and oneself is sufficient.
Keeping the focus on pairs of musicians, I aim to examine further the role of cross-modal information in inter-performer communication and to clarify its neural and cognitive basis. The focus will be on the communication of a pattern of accentuation, which can be strong (a large difference between accented and unaccented tones) or weak (a small difference between accented and unaccented tones). We will investigate the immediacy and automaticity with which a performer follows changes in a co-performer's accentuation pattern, and how this depends on the type of visual information presented. Does a performer follow a co-performer's visual gestures equally strongly when the co-performer plays a different instrument as when they play the same one? And does a performer follow, and integrate into their own sound production, visual gestures that are abstracted from the actual production of sounds on an instrument? In other words, does the following of gestures depend on motor expertise, or does it rely on domain-general correspondences between visual and auditory information? Finally, what is the role of the right intraparietal cortex in this process, given that this area has been shown to be relevant for the cross-modal binding of information presented to different senses?
This International Academic Fellowship will allow me to realise this project in collaboration with Australian colleagues who are experts in joint action in musical performance and who have previously used neural stimulation (Transcranial Magnetic Stimulation) to investigate the role of particular brain areas in music performance. The project contributes to building a science of multimodal rather than unimodal cognition of music and, more generally, to our understanding of the neurological and cognitive underpinnings of nonverbal communication.