Fine-tuning emotion in music

What musicians do with their instruments to move us

Why does music move us?  What do musicians actually do that causes us to feel joy or sadness, or to marvel at the beauty of a performance?

Researchers have puzzled over this question for years, and much progress has been made.

We know that when a musician plays faster and louder, the listener will perceive the piece as more exciting.

When a piece is played with high pitches of short duration, it will sound happier.  Much of this groundwork has been done by a member of our team and by Swedish researchers such as Patrik Juslin.

We also know that, usually, slowing down slightly toward the end of a section of a piece (a musical ‘phrase’) sounds expressive.
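That phrase-final slowing can be sketched in code. The sketch below is purely illustrative and not from the research: the function name, the number of affected notes and the stretch factor are invented. It simply lengthens the last few note durations of a phrase linearly, the way a gentle ritardando would.

```python
def apply_phrase_final_ritardando(durations, n_final=3, max_stretch=1.15):
    """Stretch the last n_final note durations linearly, up to max_stretch.

    durations: list of note lengths in seconds.
    Returns a new list; earlier notes are unchanged.
    """
    out = list(durations)
    for i in range(n_final):
        idx = len(out) - n_final + i
        # Stretch factor grows from just above 1.0 toward max_stretch
        # as the phrase end approaches.
        factor = 1.0 + (max_stretch - 1.0) * (i + 1) / n_final
        out[idx] *= factor
    return out

# Eight equal notes of 0.5 s; the last three become progressively longer.
print(apply_phrase_final_ritardando([0.5] * 8))
```

The linear ramp is only one possible shape; performance research often models phrase-final slowing with curved (e.g. quadratic) tempo profiles instead.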

Musical features are what we hear when music is made: how loudly or softly, fast or slow, high or low, and short or long the notes are played.  But what we know much less about is what players are actually doing to their instruments to produce these musical features and emotional reactions.

 

Part of the reason is that musicians exercise such fine control over their instruments that it is difficult to measure what they are doing with sufficient accuracy.  And sometimes attempts to track a musician’s actions on the instrument end up interfering with their playing.  You might wonder: what would happen if a musician played with no expression at all?  Some years ago, a UNSW team built a machine that played the clarinet.  The aim was to investigate quantitatively the physics of playing the instrument.

More recently, our team has been investigating the link between what musicians (this time with real clarinettists!) are doing while playing and how that contributes to the musical features we hear, and thus the emotional response.

 

A COOL CAT FOR THE CLARINET

 

How the player shapes their mouth around the mouthpiece, how much pressure they apply with their lips and teeth, the tongue position and the blowing speed are all analysed.  The newly developed equipment also includes motion-tracking video to measure the angle at which the clarinet is held, among various other parameters.  The software used for this high-resolution equipment is now publicly available: the Music Instrument Performance Capture and Analysis Toolbox, or MIPCAT.


Sample capture of time series for four performances by two expert players. From top to bottom: the score (transposed for the instrument), playing frequency, sound RMS amplitude, blowing pressure, normalised distance from the reed to the lay of the mouthpiece, angle of the clarinet and covered area of the mouthpiece. The photographs show two extreme examples of mouthpiece covering (bottom) and clarinet angle (right-hand side).

 

The MIPCAT brings together high-resolution technology that allows researchers to examine closely how a player interacts with the instrument.  Among the many questions the toolbox can be used to answer, we are now starting to uncover the missing part of the emotion-in-music puzzle.
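One of the musical features such equipment tracks, the sound's RMS (root-mean-square) amplitude, is straightforward to compute from a recording. The sketch below is not MIPCAT code; the function name, frame length and hop size are illustrative choices. It slides a short analysis window across the audio samples and returns one RMS value per window, giving a loudness-related envelope over time.

```python
import numpy as np

def rms_envelope(signal, frame_len=1024, hop=512):
    """Short-time RMS amplitude of an audio signal.

    Returns one RMS value per analysis frame; frame_len and hop
    are in samples.
    """
    # All overlapping windows of length frame_len, subsampled every hop samples.
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]
    return np.sqrt(np.mean(frames.astype(float) ** 2, axis=1))

# Example: a sine wave of amplitude 0.5 has RMS close to 0.5 / sqrt(2).
sr = 44100                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of time stamps
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
env = rms_envelope(tone)
print(env[0])                   # close to 0.354
```

In practice the envelope would be converted to decibels and aligned with the other measured time series (blowing pressure, reed distance and so on) for comparison across performances.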

 

The communication and feedback chains. Data analysed in bold.

 

Professional clarinettists were ‘hooked up’ to the MIPCAT and asked to play pieces of music with different expressive intentions, including happiness, sadness, ‘expressively’ and ‘deadpan’ (in an expressionless manner).

The research team then took recordings of those performances and examined whether listeners could detect the intended emotions.

Some expected results for musical features replicated previous studies (louder playing was linked to more exciting emotions, for example).  Novel findings are emerging in terms of how musicians manipulate the instrument.

 

AS-IF ANGRY PLAYING

An interesting finding is that to achieve an angry sound, the player uses a stronger bite force.  This new finding suggests that, although musicians use great skill to achieve expressive playing, some of the basis for achieving an expressive goal may be linked to the embodied emotions one would experience during the actual (not just musical) emotion.  The embodiment idea is that the body ‘experiences’ and displays emotion in universally understood ways, and that playing music to communicate a particular emotion exploits these existing means of communication.  The stronger bite used in ‘angry’ playing parallels the tension one feels in the throat when angry, and the impact this has on the voice: we can detect when someone is angry from their voice, which is shaped by increased muscle tension in the vocal apparatus.

Emotions in real life are sometimes expressed in multiple sensory modes at once, through visual, auditory and kinaesthetic messages.  A charging elephant, for example, makes itself look bigger, runs toward whatever it seeks to frighten, and makes loud sounds.  David Huron observed that some emotions in music are also reflected in synchronised, multi-sensory signals, in addition to the sound produced by the musician.  This could explain why the player’s bite is stronger during the production of angry sounds.

 

The musical sound produced on the clarinet seems to be a product of facial and vocal display patterns during real-life episodes of anger!

 

A charging elephant also makes gestures and sounds that reinforce the message it is conveying.

 

Our measurements suggest that the player's lips and teeth act on the clarinet in a manner analogous to the vocal tract during speech (imagine speaking to someone when you are tense or angry: what happens to your throat?).  The player may be transferring some of that angry muscle tension into the embouchure on the clarinet mouthpiece.  Up close, the player’s action on the instrument looks ‘as if’ they are angry.  This may not usually be visible to the naked eye, but it can affect the sound.

With results from the MIPCAT, many more findings are possible about what musicians are doing at a very fine level, and about how those actions influence our perception of music.  The precision of the timing measurements will give insights rarely available even to the finest musicians and teachers.  Mind you, musicians are very good at describing what they need to do to their instrument to achieve a particular expressive goal.

This research was made possible with the support of the Australian Research Council.  It is a cross-disciplinary collaboration between the Music Acoustics group at UNSW, the Empirical Musicology Laboratory, also at UNSW, and WSU.  The main researchers involved in the project are Drs. Andre Goios Borges De Almeida, Weicong Li, John Smith (Emeritus), Emery Schubert and Joe Wolfe (Emeritus).

For media enquiries, contact: Emery Schubert, e.schubert@unsw.edu.au, +61-2-9385-6808. 

November, 2024


Videos

Footage from previous work by members of the Music Acoustics Group at UNSW, made when they developed the robot clarinet (part of a separate project), shown here playing a duet with Dr. Deborah DeGraaff. (Footage taken from Emeritus Prof. Joe Wolfe's YouTube channel: https://www.youtube.com/@JoeWolfe).
Robot clarinettist and human play a duet