
Symposium

26-27 January 2026, Belval (LU)

Methodological Challenges and Perspectives in Multimodal Music Performance Research

Music performance is a complex, multimodal activity that relies on a highly coordinated interplay of cognitive, emotional, and motor processes. Fully understanding this dynamic act requires methodological approaches capable of capturing, analysing, and interpreting these dimensions. Psychophysiological and biomechanical methods, among others, offer valuable insights into the mechanisms underlying music performance; however, each technique introduces additional layers of complexity related to data acquisition, processing, and interpretation, while also raising challenges regarding ecological validity in experimental contexts. 

This two-day symposium brings together leading researchers with expertise in these methodological approaches to examine conceptual and methodological issues associated with integrating psychophysiological and biomechanical data, including advanced analytical techniques and computational modelling. By fostering dialogue across musicology, neuroscience, engineering, and computer science, it aims to promote interdisciplinary collaboration and define future research directions. In addressing these challenges, the symposium seeks to advance our understanding of the complexities of music performance and support the development of integrated, ecologically valid research methodologies.

Keynote Speakers

Giacomo Novembre

Methodological challenges and innovations in the neuroscience of music, dance and social interaction

➛ abstract

George Waddell

Decoding Performance: Capturing, Analysing, Evaluating, and Enhancing Performance in Live and Simulated Environments

➛ abstract

Laura Bishop

Capturing bodily expressivity and coordination in classical ensemble playing using eye-tracking and motion capture

➛ abstract

Saravana Duraisamy

Shining Light on the Musical Mind: Exploring Brain Activity in Real Performance with fNIRS

➛ abstract

Ernest Kamavuako

Making Sense of Sensors: Methodological Considerations in Biomechanical and Physiological Sensor Fusion

➛ abstract

Inès Chihi

When Sensors Learn to Fix Themselves for an AI Beyond Listening

➛ abstract

PhD Talks

Coming soon...

Preliminary Schedule


Abstracts

Giacomo Novembre

Methodological challenges and innovations in the neuroscience of music, dance and social interaction


Music and dance are widely recognized as human universals with essential social functions, yet neuroscience remains ill-equipped to investigate their biological foundations. To advance this research, I propose that neuroscience must (i) place greater emphasis on behaviour, developing methods to analyse its multidimensional complexity; (ii) integrate behavioural and neural approaches holistically; (iii) refine computational techniques to disentangle overlapping neural processes during ecologically valid behaviours; (iv) adopt a comparative perspective, spanning different ages and species; and (v) become more inclusive, incorporating active musical engagement in diverse populations, including laypeople, infants, and clinical groups. In this talk, I will present recent work from my lab that addresses these challenges, offering new insights into the neuroscience of music, dance, and social interaction. 


George Waddell

Decoding Performance: Capturing, Analysing, Evaluating, and Enhancing Performance in Live and Simulated Environments


Music performance is a complex act requiring a highly coordinated interplay of cognitions, decisions, and movements. Fully understanding such performances requires multimodal approaches to capturing, analysing, interpreting, and applying these phenomena, each of which introduces further complexity. Introducing measurement techniques into a performance can interfere with musicians’ attention, freedom of movement, and artistic process, and can therefore affect the ecological validity of the experiment, especially in live and pressured settings. Desired performance outcomes are highly individual and subjective, complicating attempts at analysis and generalisation. Moreover, the complexity of the data can make it difficult to translate findings back into knowledge that is genuinely useful for the musician. This session will survey research seeking to optimise performance by decoding performance where it happens. It will consider musicians’ attitudes towards and use of technologies in their practice, simulations that allow live performance to be studied in controlled settings, and prototype data visualisations that allow musicians to inform their performance and preparation in real time.

Laura Bishop

Capturing bodily expressivity and coordination in classical ensemble playing using eye-tracking and motion capture 


Ensemble musicians must work collectively to coordinate expressive nuances and adapt to varying performance environments. Coordination in sound is paramount, but it is supported and enhanced by coordination in other modalities, such as visual gesturing and mutual eye gaze. This talk presents a programme of research aiming to understand ensemble musicians’ expressive, communicative, and coordination abilities. The focus is on classical ensemble playing, where musicians are guided by a score but must still work together in real time to resolve problems that arise mid-performance. A central question is: how do musicians collectively adapt to changing performance conditions? I am interested in the group processes that enable musicians to maintain coordination despite disruptions to their normal modes of interaction. My approach involves analysing bodily activity and sound output as musicians perform in different conditions and settings, and testing for changes in their expressive approach and coordination strategies.


This talk is structured around three studies. The first study examines how advanced student pianists approach performing a new piece together after rehearsing it separately. We analysed head motion and variability in tempo and dynamics in solo and duet renditions of the same piece, and found that pianists tended to converge to a more predictable interpretation when playing together. The second study compared student and expert string quartets on the strength of their visual interaction and adaptability to disruptive playing conditions. We analysed head motion, eye gaze patterns, and pupil size, and found the expert quartet to be more coordinated overall and more resistant to conditions that disrupted their visual interactivity. The third study investigated temporal variability and synchrony in professional orchestral musicians’ expressive body sway, in rehearsal and concert settings, with and without the aid of a conductor. Our findings suggest that musicians maintained a similar expressive strategy in rehearsal and concert settings, and that the conductor had a coordinating effect on the intensity of their body motion. In the final part of the talk, I will discuss the methodological challenges that we have encountered as we have attempted to collect reliable body activity data from groups of musicians in laboratory and real-world conditions.  


Saravana Duraisamy

Shining Light on the Musical Mind: Exploring Brain Activity in Real Performance with fNIRS 


Musical performance is one of the most complex human activities, combining perception, movement, emotion, and creativity. Yet studying how the brain supports these skills during real music-making has long been limited by the movement constraints of traditional lab setups. Functional near-infrared spectroscopy (fNIRS) offers a powerful, movement-tolerant window into brain activity, enabling researchers to measure how musicians think, feel, and adapt while they play.

In this keynote, I will introduce the principles of fNIRS and demonstrate how it can be used to investigate musical skill acquisition, cognitive load, and emotional engagement in both novices and experts. Through case examples, I will highlight practical insights, methodological challenges, and future directions for combining fNIRS with motion and audio analysis. The goal is to show how “brain and body” measurements can enrich our understanding of musical performance in natural, dynamic contexts.


Ernest Kamavuako

Making Sense of Sensors: Methodological Considerations in Biomechanical and Physiological Sensor Fusion 


Multimodal research offers powerful opportunities to study complex human activities by integrating signals from diverse sources, including biomechanical and physiological sensing, motion capture, inertial systems, and other wearable technologies. Yet the combination of modalities also introduces significant methodological challenges across every stage of the research process—from experimental design and data collection to analysis and interpretation. 

This keynote will draw on experiences from several domains where sensor fusion has been applied, including motion detection, fluid and food intake monitoring, and music performance research. Through these examples, I will reflect on how multimodal systems can enhance robustness and interpretability, but also how issues such as signal variability, alignment, and ecological validity (that is, how realistic and transferable our measurements are to real-world performance contexts) must be carefully managed.

The contribution of this talk is to outline methodological principles of sensor fusion that can be transferred across domains—including music performance—helping researchers design, integrate, and interpret multimodal systems with greater reliability and relevance. In doing so, it connects engineering-based approaches to the SAMUSE project’s vision of understanding skill acquisition through multi-sensing and embodiment. 


Inès Chihi

When Sensors Learn to Fix Themselves for an AI Beyond Listening 


In practice, most sensing systems used in human–instrument interaction suffer from drift, noise, loss of calibration, hysteresis, motion artefacts, or partial sensor failure. When the sensing layer fails, the AI misinterprets performance, not because the model is wrong, but because the data is. 

This talk introduces a framework for sensor self-assessment and self-healing, in which the system continuously evaluates signal quality, detects faults, estimates uncertainty, and compensates for or reconstructs degraded data in real time. Once sensing becomes reliable and multimodal, combining gesture, biomechanics, muscle activation, posture, timing, and sound, AI can move beyond listening and deliver feedback that is physically aware, adaptive, and pedagogically meaningful.

The result is a shift from audio-only “correct/incorrect” evaluation to a new class of learning systems that understand how a musician plays, not just what they play. 


