Psychology


Performance, Set, and Setting - Music Experience


Introduction

The musical experience is influenced by several factors, including auditory and visual cues and the set and setting of live performances. Talented performers understand, consciously or unconsciously, how these factors interact and use them to improve the reception of a performance and the effectiveness of its emotional communication. A performer's effective use of auditory and visual cues explains up to 70% of the variance in the emotion perceived by listeners. And while performers develop individualized codes through non-musical life experiences, listeners can flexibly combine these cues to arrive at the intended emotional response. A listener's code doesn't need to explicitly match the performer's for the emotion to be accurately transmitted.

Humans appear hardwired to understand the visual components of music even when they can't actually see them. Essentially, we can feel the emotion and imagine the performer's facial expressions and gestures while listening. Performers who evoke these visual cues during recording transmit emotion more effectively by tapping into the human hardwiring for empathy: when we listen to someone experiencing an emotion, we feel it as if it originated within ourselves.

Beyond visual and auditory cues, the set and setting play an integral part in live performance reception. The setting, including the venue and its relative comfort, can add environmental 'contamination' to the reception of a performance, and performers cannot control the audience's mood or personal likes and dislikes. Yet there's something special about a live performance: the combined audio-visual experience within a concert venue leads to higher satisfaction and enjoyment of the performance.

What are Cues?

Research has demonstrated that performers can communicate specific emotions to an audience, but the nature of this mechanism has largely been ignored. Performers express feelings through cues, which then affect a listener's judgment of the emotional expression conveyed by the performance. The function of these cues is to enhance the emotional impact of the musical performance as well as clarify its structure. In short, a performer is usually telling a story through lyrics and music that is meant to induce a specific emotion in listeners. Performers make this happen with cues.

Performers combine two types of cues, visual and auditory, when creating a musical experience. The acoustic cues allow performers to add their unique style or code to a performance. But watching a performer, it seems clear that the performer also tries to communicate unspoken information to listeners through gestures and facial expressions.

Auditory Cues

The first cues to be discussed are auditory, including tempo, sound level, articulation variability, and spectrum. These cues seem hardwired into humanity, as even young children can vary them while singing to express specific emotions. But among performers, there appears to be a large range in the code for expressing emotion. How can emotional expression be successful without a somewhat universal code between listeners and performers?

Juslin (2000) crafted a study around performers' utilization of these cues to better examine the role of these cues and how listeners can effectively use them even with different cue codes. The study asked three guitarists to perform three short melodies to communicate four universal emotions: sadness, happiness, anger, and fear.

Investigating Auditory Cues

Juslin applied the Lens Model Equation (LME) to relate listeners and performers. The LME outlines two key variables for measuring cue utilization:

  • Achievement – how accurately the performer’s intention is translated to the listener’s emotional judgment/experience

  • Matching – how closely the performer’s code of cues matches the listener’s
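The two variables above can be made concrete with a small simulation. The sketch below is illustrative only: the cue weights, noise levels, and sample size are made up, and this is a simplified reading of the lens model rather than Juslin's actual specification. It treats achievement as the correlation between the performer's intended expression and the listener's judgment, and matching as the correlation between the predictions of the two cue codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 performances, 3 acoustic cues per performance
# (say, tempo, sound level, articulation). All values are simulated.
n = 40
cues = rng.normal(size=(n, 3))

# The performer encodes intention into cues with one set of weights
# (their expressive "code"), plus some inconsistency (noise).
performer_weights = np.array([0.7, 0.5, 0.3])
intention = cues @ performer_weights + rng.normal(scale=0.4, size=n)

# The listener decodes the cues with a slightly different code.
listener_weights = np.array([0.6, 0.6, 0.2])
judgment = cues @ listener_weights + rng.normal(scale=0.4, size=n)

# Achievement: correlation between the performer's intention
# and the listener's emotional judgment.
achievement = np.corrcoef(intention, judgment)[0, 1]

# Matching: correlation between the predictions of the two linear
# codes, i.e. how similar performer and listener cue utilization is.
matching = np.corrcoef(cues @ performer_weights,
                       cues @ listener_weights)[0, 1]

print(f"achievement = {achievement:.2f}, matching = {matching:.2f}")
```

Even with deliberately mismatched weights, achievement stays high, which is the point of the study: listeners combine cues flexibly, so codes need only overlap, not match exactly.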


Juslin's hypothesis on the success of emotional communication using auditory cues was correct. All emotions were effectively communicated by the guitarists through auditory cues. Even within the same melodies, the three guitarists could create a distinction between sadness, fear, happiness, and anger.  

Anger was the most consistently identified by listeners across all three guitarists. The achievement for anger was significantly higher than fear, sadness, and happiness. While fear, sadness, and happiness lagged behind anger, they displayed no significant differences in achievement rates. The similarities between listener and performer utilization of cues, known as matching, followed the same ranking order as achievement: anger, fear, happiness, and sadness.

Of the five cues Juslin focused on, the listeners and performers shared the most code similarities in sound level and articulation. Sound level and articulation may be more hardwired into the human understanding of emotion than other cues. Although listeners emphasized the importance of tempo, performers were more likely to cite articulation as the most important.

Achievement was high across the board. The Lens Model Equation revealed that the performer's expressive intention could explain 70% of the variance in the listener's perceptions. This means listeners are highly influenced by a performer’s cues. The study results suggest that there is no pressure to mimic a specific expressive code because the range of cues works together to influence a listener's judgment. This means two performers can use different cue utilization strategies and still reach a relatively similar level of achievement. Performers can develop unique performance styles because a listener's brain combines cues flexibly. Again these cues appear to have an underlying empathetic element that is a universal human experience.

Origin of Cue Utilization

Juslin suggests that the origin of the nonverbal code that performers use to elicit specific emotions in listeners is related to the same brain programming for the vocal expression of emotions. Listeners and performers have broadly consistent associations between specific cues with specific emotions:

  • Anger – fast tempo, legato articulation, small articulation variability, and very high sound level

  • Happiness – fast tempo, staccato articulation, high articulation variability, high sound level

  • Sadness – slow tempo, legato articulation, small articulation variability, and low sound level

  • Fear – slow tempo, staccato articulation, high articulation variability, and very low sound level
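The emotion-to-cue associations listed above can be expressed as a simple lookup table. The field names and value labels below are illustrative shorthand, not Juslin's original coding scheme; a small helper shows how emotion pairs can share parts of the code while differing on others.

```python
# Cue profiles for the four emotions, taken from the list above.
CUE_PROFILES = {
    "anger":     {"tempo": "fast", "articulation": "legato",
                  "articulation_variability": "small", "sound_level": "very high"},
    "happiness": {"tempo": "fast", "articulation": "staccato",
                  "articulation_variability": "high", "sound_level": "high"},
    "sadness":   {"tempo": "slow", "articulation": "legato",
                  "articulation_variability": "small", "sound_level": "low"},
    "fear":      {"tempo": "slow", "articulation": "staccato",
                  "articulation_variability": "high", "sound_level": "very low"},
}

def shared_cues(emotion_a: str, emotion_b: str) -> list[str]:
    """Return the cue dimensions on which two emotions use the same setting."""
    a, b = CUE_PROFILES[emotion_a], CUE_PROFILES[emotion_b]
    return [cue for cue in a if a[cue] == b[cue]]

# Anger and sadness share articulation style and variability,
# differing mainly in tempo and sound level.
print(shared_cues("anger", "sadness"))  # → ['articulation', 'articulation_variability']
```

Notice the symmetry: each emotion differs from the others on at least two dimensions, which helps explain why listeners could reliably tell the four apart from audio alone.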

Performers use the same code when performing that is used during vocal expression. This indicates there is an intimate relationship between the human voice and music. Juslin further theorizes that cue utilization and code establishment begin with the relationship between mother and infant. This lifelong process is influenced by extramusical life experiences, which might explain the wide variability in cue utilization.

Visual Cues

Beyond auditory cues, visual cues are another component of the musical experience contributing to emotional expression. According to Thompson, Graham, and Russo, communication in music extends beyond sounds and involves continuously changing and meaningful use of visual cues. These visual cues include body movements, facial expressions, and hand gestures.

Before the introduction of the radio and gramophone, musical performances were historically experienced as combined audio-visual experiences. Thompson, Graham, and Russo wanted to examine the use of visual signals in performances to understand better how they impact the listener's perception of the performance.

Modes, Genres, and Medium

The researchers suggest that musical performances must be examined through three influencing components: mode, genre, and medium. Each of these components contributes to the final perception of a musical performance. You can consider these three components through a funnel, with medium influencing the final emotional experience to a lesser degree than genre and genre influencing to a lesser degree than mode.


Medium

The medium is the channel through which the performer communicates to the listener. In the past, the medium was always through direct experience, but today musical mediums include television and streaming. A given medium can accommodate a wide range of genres, although the technical properties of certain mediums may be better suited to some genres than others.

Genre

Genre describes a conventional category in which music is separated based on patterned interactions and shared characteristics. Genres are identified by a specific format, style, and often content. For example, country music is music that includes components like ballads, dance songs, folk lyrics, string instruments, banjos, guitars, and fiddles. Genres can be divided into subgenres for further resolution of a specific musical sound and experience. Multiple characteristic modes usually constitute a given genre.

Mode

Mode describes the specific cues used by a performer, including tone of voice, gestures, word choice, facial expression, tempo, and many more. Mode is the particular way through which performers texture and form emotional expression.

The fact that mode has the highest level of influence is supported by research into auditory cues. The performer has a much greater impact on the reception of the music than previously thought because humans are conditioned to react empathetically to emotional cues.

Facial Expressions, Gestures, and Body Movements

The study observed performances from two well-known performers, B.B. King and Judy Garland. These performers used unique visual cues to contribute to the overall emotional communication of a musical piece. Visual cues can be separated into four categories:

  • Emblems – body movements or signaling that translates to a specific verbal message within a culture, like the peace sign

  • Illustrators – body movements for clarification or emphasis, such as pointing

  • Regulators – body movements that regulate the pace and content, like eye contact and head nods

  • Affective Displays – facial expressions that communicate the emotional state of the content, such as smiling or frowning

B.B. King emphasized dissonance using affective displays, frequently through a pained expression, closed eyes, and head shaking. Audience members interpret these facial expressions and gestures to mean that King is struggling against difficult emotions and reflecting upon life experiences. King also used these cues to indicate which passages were challenging to play, signaling concentration, and which were satisfying to play, signaling enjoyment and ease. His body movements reflected the large-scale structure of the music, as he exaggerated his movements during climactic moments.


Judy Garland incorporated visual cues throughout her performances that closely match the lyrical content. When singing lyrics with negative emotions, like 'I was lost,' she used a swimming motion to elicit images of her lost at sea and searching for a way out. During the word 'tossed,' she added a tossing gesture. Garland used a rhythmic illustrator when singing 'for love came just in time' by snapping her fingers. Garland used her whole body in performances, walking towards the crowd or camera to highlight musical changes and thrusting her hand forward to indicate resolution at the end of a performance.

Interestingly, musicians use these same facial expressions, gestures, and body movements in closed recording sessions. It seems that visual cues are an integral part of communicating emotional expression through music, regardless of whether those cues are visible to listeners. This coincides closely with the use of facial expressions and gestures during speech. Human brains can pick up on visual cues, without a direct visual representation, because we are hardwired for empathy. When a performer channels an emotion, the audio and visual cues combine into a powerful inducer of emotions. Even when half of the experience is missing, listeners can home in on the performer's body movements and facial expressions.

The Reemergence of the Visual Dimension

Listeners are highly sensitive to the emotional content of music, and performers use various techniques, including visual cues, to express that emotional meaning. Between the 1920s and 1990s, music transitioned from a combined visual and aural experience to a solely aural experience. But in the last 20 years, this trend seems to be reversing.

In 1981, MTV began broadcasting music videos 24 hours a day, to critical success. Then in the early 2000s, YouTube gave musicians another way to present performances with combined audio and visual information, in the form of on-demand music videos. Today the visual dimension of music has been reintroduced through new platforms like TikTok. Performers and content creators produce incredible choreography, storytelling, and audio-visual experiences for people worldwide. The form has certainly shortened, but the impact is still substantial. The visual aspects of music performance may be more influential today than at any other time in history.

Set and Setting

In live performance situations, there is an added level of complexity to the overall judgment of listeners. Audience members experience the aural and visual components of a musical performance with the additional context of set and setting. Thompson wanted to understand further how a listener's perception of performance is affected by set, setting, and the quality of the performance itself.

He asked actual audience members attending a performance of Lutoslawski's Mi-parti and Chopin's Piano Concerto No. 1 in E minor, Op. 11, to rate the performance along the following dimensions:

  • Perceived quality of the performance

  • Liking for and familiarity with the music

  • Emotional response to the music

  • Satisfaction with the concert hall

  • Enjoyment of performance as a whole

He used a real audience because he wanted reactions from individuals who had explicitly chosen to attend the performance and wanted to be there. Ninety-one participants fully completed the study form and were entered into the study.

Setting Contamination

A performer does not have total control over performance perception and final judgment. Performers control the quality of the performance, although how the quality is perceived depends on the level of music training of audience members. Performers also have control over the specific visual and auditory cues they incorporate into their performance.


But performers have little influence over the mood of audience members, their specific musical tastes, the perception of the music venue, the comfort of the music venue, and a host of other factors. In this way, live performances are susceptible to 'contamination' by non-musical factors.

Affective Response is the Most Important Factor

Thompson found that the perceived quality of the performance was just one factor contributing to the overall enjoyment of a musical performance. Surprisingly, audience members could clearly distinguish their overall affective response from the quality of the performance. This suggests that performance quality is not a key factor in how well a performance is received or the success of emotional communication. The relative affective response among audience members was the most significant factor in the overall enjoyment of the performance.

Thompson had theorized that Chopin would be better liked overall because Chopin was a much higher profile composer than Lutoslawski. He believed familiarity with the piece or artist influenced the overall affective response to the performance. Instead, the study revealed little statistical significance between the overall affective response and previous familiarity.

How Performance, Set, and Setting Combine

Research by Juslin has revealed the innate ability of listeners to flexibly understand a performer's unique auditory cues. A singular code does not need to be shared by everyone; there is enough overlap and natural empathy that listeners understand the performer's emotional intent. The same idea extends to visual cues, where facial expressions, gestures, and body movements are incorporated into a performance to provide important non-auditory information. Yet, amazingly, humans understand the associated visual cues of a musical piece even without seeing the performer. The more a performer channels their emotions through auditory and visual representations, the better listeners can naturally empathize, even when the visual information is missing. Up to 70% of a listener's final affective perception results from the performer's use of cues. Adding further influence into the equation, in the form of set and setting, doesn't fundamentally change the experience for audience members. While the setting can introduce some non-musical contamination, the affective response is still the leading variable in the overall judgment of a performance.

Introduction

The musical experience is influenced by several factors, including auditory and visual cues and the set and setting in live performances. Talented performers consciously or unconsciously understand how these factors interact and use them to improve the reception of performances and the effectiveness of emotional communication. A performer's effective use of auditory and visual cues explains up to 70% of the variance in the emotion perceived from listening. And while performers have individualized codes, thanks to non-musical life experiences, listeners can flexibly combine these cues to arrive at the intended emotional response. A listener's code doesn't need to explicitly match the performers for the emotion to be accurately transmitted. Humans appear hardwired to understand the visual components of music even when they can’t actually see the visual component. Essentially, we can feel the emotion and imagine the performer’s facial expressions and gestures when listening to music. Performers that evoke these visual cues during recording will more effectively transmit emotion by tapping into the human hardwiring for empathy. When humans listen to someone experiencing an emotion, we feel the emotion as if it originated within ourselves. Beyond visual and auditory cues, the set and setting play an integral part in live performance reception. But the setting, including the venue and its relative comfort, can add environmental ‘contamination’ to the reception of a performance. Performers cannot control the audiences mood or personal likes and dislikes. Yet there’s something special about a live performance, the combined audio-visual experience within a concert venue that leads to higher satisfaction and enjoyment of the performance. ‍

What are Cues?

Research has demonstrated that performers can communicate specific emotions to an audience, but the nature of this mechanism has largely been ignored. Performers express feelings through cues, which then affect a listener's judgment of the emotional expression conveyed by the performance. The function of these cues is to enhance the emotional impact of the musical performance as well as clarify the content structure. In short, a performer is usually telling a story through lyrics and music, that are meant to induce a specific emotion in listeners. Performers make this happen with cues.

Performers combine two types of cues, visual and auditory when creating a musical experience. The acoustic cues allow performers to add their unique style or code to a performance. But watching a performer, it seems clear that the performer also tries to communicate unspoken information to listeners through the use of gestures and facial expressions.

Auditory Cues

The first cues to be discussed are auditory, including tempo, sound level, articulation variability, and spectrum. These cues seem hardwired into humanity, as even young children can evoke these variables during singing to express specific emotions. But among performers, there appears to be a large range in the code for expressing emotion. How can emotional expression be successful without a somewhat universal code between listeners and performers?

Juslin (2000) crafted a study around performers' utilization of these cues to better examine the role of these cues and how listeners can effectively use them even with different cue codes. The study asked three guitarists to perform three short melodies to communicate four universal emotions: sadness, happiness, anger, and fear.

Investigating Auditory Cues

Juslin applied the Lense Model Equation (LME) to relate the listeners and performers. The LME equation outlined two key variables for measuring cue utilization:

  • Achievement – how accurately the performer’s intention is translated to the listener’s emotional judgment/experience

  • Matching – how closely the performer’s code of cues matches the listeners

Auditory Cues music

Juslin's hypothesis on the success of emotional communication using auditory cues was correct. All emotions were effectively communicated by the guitarists through auditory cues. Even within the same melodies, the three guitarists could create a distinction between sadness, fear, happiness, and anger.  

Anger was the most consistently identified by listeners across all three guitarists. The achievement for anger was significantly higher than fear, sadness, and happiness. While fear, sadness, and happiness lagged behind anger, they displayed no significant differences in achievement rates. The similarities between listener and performer utilization of cues, known as matching, followed the same ranking order as achievement: anger, fear, happiness, and sadness.

Of the five cues Juslin focused on, the listeners and performers shared the most code similarities in sound level and articulation. Sound level and articulation may be more hardwired into the human understanding of emotion than other cues. Although listeners emphasized the importance of tempo, performers were more likely to cite articulation as the most important.

Achievement was high across the board. The Lens Model Equation revealed that the performer's expressive intention could explain 70% of the variance in the listener's perceptions. This means listeners are highly influenced by a performer’s cues. The study results suggest that there is no pressure to mimic a specific expressive code because the range of cues works together to influence a listener's judgment. This means two performers can use different cue utilization strategies and still reach a relatively similar level of achievement. Performers can develop unique performance styles because a listener's brain combines cues flexibly. Again these cues appear to have an underlying empathetic element that is a universal human experience.

Origin of Cue Utilization

Juslin suggests that the origin of the nonverbal code that performers use to elicit specific emotions in listeners is related to the same brain programming for the vocal expression of emotions. Listeners and performers have broadly consistent associations between specific cues with specific emotions:

  • Anger – fast tempo, legato articulation, small articulation variability, and very high sound level

  • Happiness – fast tempo, staccato articulation, high articulation variability, high sound level

  • Sadness – slow tempo, legato articulation, small articulation variability, and low sound level

  • Fear – slow tempo, staccato articulation, high articulation variability, and very low sound level

Performers use the same code when performing that is used during vocal expression. This indicates there is an intimate relationship between the human voice and music. Juslin further theorizes that cue utilization and code establishment begin with the relationship between mother and infant. This lifelong process is influenced by extramusical life experiences, which might explain the wide variability in differences in cue utilization.  

Visual Cues

Beyond auditory cues, visual cues are another component of the musical experience contributing to emotional expression. According to Thompson, Graham, and Russo, communication in music extends beyond sounds and involves continuously changing and meaningful use of visual cues. These visual cues include body movements, facial expressions, and hand gestures.

Before the introduction of the radio and gramophone, musical performances were historically experienced as combined audio-visual experiences. Thompson, Graham, and Russo wanted to examine the use of visual signals in performances to understand better how they impact the listener's perception of the performance.

Modes, Genres, and Medium

The researchers suggest that musical performances must be examined through three influencing components: mode, genre, and medium. Each of these components contributes to the final perception of a musical performance. You can consider these three components through a funnel, with medium influencing the final emotional experience to a lesser degree than genre and genre influencing to a lesser degree than mode.

music modes genre medium

Medium

The medium is the channel through which the performer communicates to the listener. In the past, the medium was always through direct experience, but today musical mediums include television and streaming. A given medium can accommodate a wide range of genres, although the technical properties of certain mediums may be better suited to some genres than others.

Genre

Genre describes a conventional category in which music is separated based on patterned interactions and shared characteristics. Genres are identified by a specific format, style, and often content. For example, country music is music that includes components like ballads, dance songs, folk lyrics, string instruments, banjos, guitars, and fiddles. Genres can be divided into subgenres for further resolution of a specific musical sound and experience. Multiple characteristic modes usually constitute a given genre.

Mode

Mode describes the specific cues used by a performer, including tone of voice, gestures, word choice, facial expression, tempo, and many more. Mode is the particular way through which performers texture and form emotional expression.

The fact that Mode has the highest-level of influence is supported by research into auditory cues. The performer has a much greater impact on the reception of the music than previously thought because humans are conditioned to react empathetically to emotional cues.

Facial Expressions, Gestures, and Body Movements

The study observed performances from two well-known performers, B.B. King and Judy Garland. These performers used unique visual cues to contribute to the overall emotional communication of a musical piece. Visual cues can be separated into four categories:

  • Emblems – body movements or signaling that translates to a specific verbal message within a culture, like the peace sign

  • Illustrators – body movements for clarification or emphasis, such as pointing

  • Regulators – body movements that regulate the pace and content, like eye contact and head nods

  • Affective Displays – facial expressions that communicate the emotional state of the content, such as smiling or frowning

B.B King emphasized dissonance using affect displays, frequently through a pained expression, closed eyes, and head shaking. Audience members interpret these facial expressions and gestures to mean that B.B. King is struggling against difficult emotions and reflecting upon life experiences. King also used these cues to indicate which passages were challenging to play, signaling concentration and which were satisfying to play, signaling enjoyment and ease. King's body movements reflected the large-scale structure of the music, as he exaggerated his movements during the climatic moments.

music artist expression

Judy Garland incorporated visual cues throughout her performances that closely match the lyrical content. When singing lyrics with negative emotions, like 'I was lost,' she used a swimming motion to elicit images of her lost at sea and searching for a way out. During the word 'tossed,' she added a tossing gesture. Garland used a rhythmic illustrator when singing 'for love came just in time' by snapping her fingers. Garland used her whole body in performances, walking towards the crowd or camera to highlight musical changes and thrusting her hand forward to indicate resolution at the end of a performance.

Interestingly musicians use these same facial expressions, gestures, and body movements in closed recordings. It seems that visual cues are an integral part of communicating emotional expression through music, regardless of whether those cues are visible to listeners or not. This coincides closely with the use of facial expressions and gestures during speech. Human brains can pickup on visual cues, without a direct visual representation, because we are hardwired for empathy. When a performer channels an emotion, their audio and visual combined into a powerful inducer of emotions. Even when half of the experience is missing, the listener’s can hone in on the performer’s body movements and facial expressions.

The Reemergence of the Visual Dimension

Listeners are highly sensitive to the emotional content of music, and performers use various techniques, including visual cues, to express that emotional meaning. Between the 1920s and 1990s, music transitioned from a combined visual and aural experience to a solely aural experience. But in the last 20 years, this trend seems to be reversing.

In 1981 MTV started streaming music videos 24 hours a day, a critical success. Then in the early 2000s, YouTube provided another way for musicians to display their performances with audio and visual information in the form of on-demand music videos. Today the visual dimension of music has been reintroduced through new technologies like TikTok. Performers and content creators create incredible choreography, story-telling, and audio-visual experiences for people worldwide. The form has certainly shortened, but the impact is still substantial. The visual aspects of music performance may be more influential today than at any other time in history.

Set and Setting

In live performance situations, there is an added level of complexity to the overall judgment of listeners. Audience members experience the aural and visual components of a musical performance with the additional context of set and setting. Thompson wanted to understand further how a listener's perception of performance is affected by set, setting, and the quality of the performance itself.

He asked actual audience members attending a performance of Lutoslawki's Mi-Part and Chopin's Piano Concerto No. 1 in E minor, Op. 11, to rate the performance along the following dimensions:

  • Perceived quality of the performance

  • Linking for and familiarity with the music

  • Emotional response to the music

  • Satisfaction with the concert hall

  • Enjoyment of performance as a whole

He used a real audience because he wanted reactions from individuals that explicitly chose to attend the performance and wanted to be there. Ninety-one participants fully completed the study form and were entered into the study.

Setting Contamination

A performer does not have total control over performance perception and final judgment. Performers control the quality of the performance, although how the quality is perceived depends on the level of music training of audience members. Performers also have control over the specific visual and auditory cues they incorporate into their performance.


But performers have little influence over the mood of audience members, their specific musical tastes, their perception of the venue, the comfort of that venue, and a host of other factors. In this way, live performances are susceptible to 'contamination' by non-musical factors.

Affective Response is the Most Important Factor

Thompson found that the perceived quality of the performance was just one factor contributing to overall enjoyment. Surprisingly, audience members could clearly distinguish their overall affective response from their assessment of performance quality. This suggests that performance quality alone does not determine how well a performance is received or whether emotional communication succeeds. The affective response of audience members was the single most significant factor in overall enjoyment of the performance.

Thompson had theorized that the Chopin would be better liked overall because Chopin was a much higher-profile composer than Lutosławski. He believed that familiarity with a piece or composer would influence the overall affective response to the performance. Instead, the study found no statistically significant relationship between overall affective response and prior familiarity.
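Thompson's analysis boils down to asking how strongly each rated dimension predicts overall enjoyment. A minimal sketch of that comparison, using hypothetical 7-point ratings from a handful of imagined audience members (illustrative numbers only, not Thompson's data):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 7-point ratings from five audience members
# (illustrative only, not Thompson's data):
affect  = [6, 4, 7, 3, 5]   # emotional response to the music
quality = [5, 5, 6, 5, 5]   # perceived quality of the performance
enjoy   = [6, 4, 7, 4, 5]   # overall enjoyment of the performance

# Thompson's finding in miniature: enjoyment tracks affective
# response more closely than it tracks perceived quality.
r_affect = pearson_r(affect, enjoy)    # ≈ 0.97
r_quality = pearson_r(quality, enjoy)  # ≈ 0.77
```

With real data one would of course work from the full 91-respondent sample and a regression model rather than toy correlations, but the logic is the same: affective response, not perceived quality, is the dimension that moves with overall enjoyment.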

How Performance, Set, and Setting Combine

Research by Juslin has revealed the innate ability of listeners to flexibly interpret a performer's unique auditory cues. A single code does not need to be shared by everyone; there is enough overlap, and enough natural empathy, for listeners to understand a performer's emotional intent. The same idea extends to visual cues: facial expressions, gestures, and body movements are incorporated into a performance to provide important non-auditory information. Yet amazingly, humans can grasp the visual cues associated with a musical piece even without seeing the performer. The more fully a performer channels an emotion through both auditory and visual expression, the more naturally listeners empathize with it, even when the visual information is missing. Around 70% of a listener's final affective perception can be traced to the performer's use of cues. Adding further influence in the form of set and setting does not fundamentally change the experience for audience members: while the setting can introduce some non-musical contamination, affective response remains the leading variable in the overall judgment of a performance.
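The 70% figure traces back to Juslin's lens-model analysis of cue utilization. For reference, the classical Brunswikian lens model equation, which Juslin adapted for musical communication (the notation below follows the standard formulation and is an assumption, not a quotation from the studies discussed here), decomposes achievement as:

```latex
% r_a : achievement -- correlation between the performer's expressive
%       intention and the listener's emotional judgment
% G   : matching -- similarity of performer and listener cue weights
% R_e : consistency of the performer's cue use
% R_s : consistency of the listener's judgments
% C   : correlation between the residual (unmodeled) components
r_a = G \, R_e \, R_s + C \sqrt{1 - R_e^{2}} \, \sqrt{1 - R_s^{2}}
```

The equation makes the article's central point precise: high achievement does not require perfect matching, because a performer who uses cues consistently (high R_e) can still be decoded reliably by a listener who weights those cues somewhat differently.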


Introduction

The musical experience is influenced by several factors, including auditory and visual cues and the set and setting in live performances. Talented performers consciously or unconsciously understand how these factors interact and use them to improve the reception of performances and the effectiveness of emotional communication. A performer's effective use of auditory and visual cues explains up to 70% of the variance in the emotion perceived from listening. And while performers have individualized codes, thanks to non-musical life experiences, listeners can flexibly combine these cues to arrive at the intended emotional response. A listener's code doesn't need to explicitly match the performers for the emotion to be accurately transmitted. Humans appear hardwired to understand the visual components of music even when they can’t actually see the visual component. Essentially, we can feel the emotion and imagine the performer’s facial expressions and gestures when listening to music. Performers that evoke these visual cues during recording will more effectively transmit emotion by tapping into the human hardwiring for empathy. When humans listen to someone experiencing an emotion, we feel the emotion as if it originated within ourselves. Beyond visual and auditory cues, the set and setting play an integral part in live performance reception. But the setting, including the venue and its relative comfort, can add environmental ‘contamination’ to the reception of a performance. Performers cannot control the audiences mood or personal likes and dislikes. Yet there’s something special about a live performance, the combined audio-visual experience within a concert venue that leads to higher satisfaction and enjoyment of the performance. ‍

What are Cues?

Research has demonstrated that performers can communicate specific emotions to an audience, but the nature of this mechanism has largely been ignored. Performers express feelings through cues, which then affect a listener's judgment of the emotional expression conveyed by the performance. The function of these cues is to enhance the emotional impact of the musical performance as well as clarify the content structure. In short, a performer is usually telling a story through lyrics and music, that are meant to induce a specific emotion in listeners. Performers make this happen with cues.

Performers combine two types of cues, visual and auditory when creating a musical experience. The acoustic cues allow performers to add their unique style or code to a performance. But watching a performer, it seems clear that the performer also tries to communicate unspoken information to listeners through the use of gestures and facial expressions.

Auditory Cues

The first cues to be discussed are auditory, including tempo, sound level, articulation variability, and spectrum. These cues seem hardwired into humanity, as even young children can evoke these variables during singing to express specific emotions. But among performers, there appears to be a large range in the code for expressing emotion. How can emotional expression be successful without a somewhat universal code between listeners and performers?

Juslin (2000) crafted a study around performers' utilization of these cues to better examine the role of these cues and how listeners can effectively use them even with different cue codes. The study asked three guitarists to perform three short melodies to communicate four universal emotions: sadness, happiness, anger, and fear.

Investigating Auditory Cues

Juslin applied the Lense Model Equation (LME) to relate the listeners and performers. The LME equation outlined two key variables for measuring cue utilization:

  • Achievement – how accurately the performer’s intention is translated to the listener’s emotional judgment/experience

  • Matching – how closely the performer’s code of cues matches the listeners

Auditory Cues music

Juslin's hypothesis on the success of emotional communication using auditory cues was correct. All emotions were effectively communicated by the guitarists through auditory cues. Even within the same melodies, the three guitarists could create a distinction between sadness, fear, happiness, and anger.  

Anger was the most consistently identified by listeners across all three guitarists. The achievement for anger was significantly higher than fear, sadness, and happiness. While fear, sadness, and happiness lagged behind anger, they displayed no significant differences in achievement rates. The similarities between listener and performer utilization of cues, known as matching, followed the same ranking order as achievement: anger, fear, happiness, and sadness.

Of the five cues Juslin focused on, the listeners and performers shared the most code similarities in sound level and articulation. Sound level and articulation may be more hardwired into the human understanding of emotion than other cues. Although listeners emphasized the importance of tempo, performers were more likely to cite articulation as the most important.

Achievement was high across the board. The Lens Model Equation revealed that the performer's expressive intention could explain 70% of the variance in the listener's perceptions. This means listeners are highly influenced by a performer’s cues. The study results suggest that there is no pressure to mimic a specific expressive code because the range of cues works together to influence a listener's judgment. This means two performers can use different cue utilization strategies and still reach a relatively similar level of achievement. Performers can develop unique performance styles because a listener's brain combines cues flexibly. Again these cues appear to have an underlying empathetic element that is a universal human experience.

Origin of Cue Utilization

Juslin suggests that the origin of the nonverbal code that performers use to elicit specific emotions in listeners is related to the same brain programming for the vocal expression of emotions. Listeners and performers have broadly consistent associations between specific cues with specific emotions:

  • Anger – fast tempo, legato articulation, small articulation variability, and very high sound level

  • Happiness – fast tempo, staccato articulation, high articulation variability, high sound level

  • Sadness – slow tempo, legato articulation, small articulation variability, and low sound level

  • Fear – slow tempo, staccato articulation, high articulation variability, and very low sound level

Performers use the same code when performing that is used during vocal expression. This indicates there is an intimate relationship between the human voice and music. Juslin further theorizes that cue utilization and code establishment begin with the relationship between mother and infant. This lifelong process is influenced by extramusical life experiences, which might explain the wide variability in differences in cue utilization.  

Visual Cues

Beyond auditory cues, visual cues are another component of the musical experience contributing to emotional expression. According to Thompson, Graham, and Russo, communication in music extends beyond sounds and involves continuously changing and meaningful use of visual cues. These visual cues include body movements, facial expressions, and hand gestures.

Before the introduction of the radio and gramophone, musical performances were historically experienced as combined audio-visual experiences. Thompson, Graham, and Russo wanted to examine the use of visual signals in performances to understand better how they impact the listener's perception of the performance.

Modes, Genres, and Medium

The researchers suggest that musical performances must be examined through three influencing components: mode, genre, and medium. Each of these components contributes to the final perception of a musical performance. You can consider these three components through a funnel, with medium influencing the final emotional experience to a lesser degree than genre and genre influencing to a lesser degree than mode.

music modes genre medium

Medium

The medium is the channel through which the performer communicates to the listener. In the past, the medium was always through direct experience, but today musical mediums include television and streaming. A given medium can accommodate a wide range of genres, although the technical properties of certain mediums may be better suited to some genres than others.

Genre

Genre describes a conventional category in which music is separated based on patterned interactions and shared characteristics. Genres are identified by a specific format, style, and often content. For example, country music is music that includes components like ballads, dance songs, folk lyrics, string instruments, banjos, guitars, and fiddles. Genres can be divided into subgenres for further resolution of a specific musical sound and experience. Multiple characteristic modes usually constitute a given genre.

Mode

Mode describes the specific cues used by a performer, including tone of voice, gestures, word choice, facial expression, tempo, and many more. Mode is the particular way through which performers texture and form emotional expression.

The fact that Mode has the highest-level of influence is supported by research into auditory cues. The performer has a much greater impact on the reception of the music than previously thought because humans are conditioned to react empathetically to emotional cues.

Facial Expressions, Gestures, and Body Movements

The study observed performances from two well-known performers, B.B. King and Judy Garland. These performers used unique visual cues to contribute to the overall emotional communication of a musical piece. Visual cues can be separated into four categories:

  • Emblems – body movements or signaling that translates to a specific verbal message within a culture, like the peace sign

  • Illustrators – body movements for clarification or emphasis, such as pointing

  • Regulators – body movements that regulate the pace and content, like eye contact and head nods

  • Affective Displays – facial expressions that communicate the emotional state of the content, such as smiling or frowning

B.B King emphasized dissonance using affect displays, frequently through a pained expression, closed eyes, and head shaking. Audience members interpret these facial expressions and gestures to mean that B.B. King is struggling against difficult emotions and reflecting upon life experiences. King also used these cues to indicate which passages were challenging to play, signaling concentration and which were satisfying to play, signaling enjoyment and ease. King's body movements reflected the large-scale structure of the music, as he exaggerated his movements during the climatic moments.

music artist expression

Judy Garland incorporated visual cues throughout her performances that closely match the lyrical content. When singing lyrics with negative emotions, like 'I was lost,' she used a swimming motion to elicit images of her lost at sea and searching for a way out. During the word 'tossed,' she added a tossing gesture. Garland used a rhythmic illustrator when singing 'for love came just in time' by snapping her fingers. Garland used her whole body in performances, walking towards the crowd or camera to highlight musical changes and thrusting her hand forward to indicate resolution at the end of a performance.

Interestingly musicians use these same facial expressions, gestures, and body movements in closed recordings. It seems that visual cues are an integral part of communicating emotional expression through music, regardless of whether those cues are visible to listeners or not. This coincides closely with the use of facial expressions and gestures during speech. Human brains can pickup on visual cues, without a direct visual representation, because we are hardwired for empathy. When a performer channels an emotion, their audio and visual combined into a powerful inducer of emotions. Even when half of the experience is missing, the listener’s can hone in on the performer’s body movements and facial expressions.

The Reemergence of the Visual Dimension

Listeners are highly sensitive to the emotional content of music, and performers use various techniques, including visual cues, to express that emotional meaning. Between the 1920s and 1990s, music transitioned from a combined visual and aural experience to a solely aural experience. But in the last 20 years, this trend seems to be reversing.

In 1981, MTV began broadcasting music videos 24 hours a day, to critical success. Then, in the mid-2000s, YouTube gave musicians another way to present their performances with combined audio and visual information, in the form of on-demand music videos. Today the visual dimension of music has been further amplified by new platforms like TikTok, where performers and content creators produce choreography, storytelling, and audio-visual experiences for people worldwide. The form has certainly shortened, but the impact is still substantial. The visual aspects of music performance may be more influential today than at any other time in history.

Set and Setting

In live performance situations, there is an added level of complexity to the overall judgment of listeners. Audience members experience the aural and visual components of a musical performance within the additional context of set and setting. Thompson wanted to understand how a listener's perception of a performance is affected by set, setting, and the quality of the performance itself.

He asked actual audience members attending a performance of Lutosławski's Mi-parti and Chopin's Piano Concerto No. 1 in E minor, Op. 11, to rate the performance along the following dimensions:

  • Perceived quality of the performance

  • Liking for and familiarity with the music

  • Emotional response to the music

  • Satisfaction with the concert hall

  • Enjoyment of the performance as a whole

He used a real audience because he wanted reactions from individuals who had explicitly chosen to attend the performance and wanted to be there. Ninety-one participants fully completed the survey and were included in the study.

Setting Contamination

A performer does not have total control over how a performance is perceived and judged. Performers control the quality of the performance, although how that quality is perceived depends on the musical training of audience members. Performers also control the specific visual and auditory cues they incorporate into their performance.


But performers have little influence over the mood of audience members, their specific musical tastes, their perception of the venue, the comfort of the venue, and a host of other factors. In this way, live performances are susceptible to 'contamination' by non-musical factors.

Affective Response is the Most Important Factor

Thompson found that the perceived quality of the performance was just one factor contributing to the overall enjoyment of a musical performance. Surprisingly, audience members could clearly distinguish their overall affective response from the quality of the performance. This suggests that performance quality alone does not determine how well a performance is received or how successfully emotion is communicated. The relative affective response among audience members was the most significant factor in overall enjoyment of the performance.

Thompson had theorized that the Chopin would be better liked overall because Chopin was a much higher-profile composer than Lutosławski. He believed familiarity with the piece or composer would influence the overall affective response to the performance. Instead, the study revealed no statistically significant relationship between overall affective response and prior familiarity.
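This pattern of results can be illustrated with a toy regression on synthetic ratings. The data below are purely hypothetical, not the study's; the sketch only shows how overall enjoyment can load heavily on affective response while familiarity contributes almost nothing once the other factors are accounted for:

```python
import numpy as np

# Hypothetical audience ratings (NOT Thompson's data): 91 attendees,
# matching the study's sample size, each rating several dimensions.
rng = np.random.default_rng(0)
n = 91

quality = rng.normal(5, 1.0, n)      # perceived performance quality
affect = rng.normal(5, 1.5, n)       # affective (emotional) response
familiarity = rng.normal(5, 2.0, n)  # prior familiarity with the piece

# Simulated overall enjoyment: driven mostly by affect, a little by
# quality, unrelated to familiarity, plus noise.
enjoyment = 0.2 * quality + 0.8 * affect + rng.normal(0, 0.5, n)

# Ordinary least squares: enjoyment ~ quality + affect + familiarity
X = np.column_stack([np.ones(n), quality, affect, familiarity])
coefs, *_ = np.linalg.lstsq(X, enjoyment, rcond=None)
intercept, b_quality, b_affect, b_familiarity = coefs
```

Fitting this model recovers a large coefficient on affective response, a modest one on quality, and a coefficient on familiarity close to zero, mirroring the finding that affect, not familiarity or quality alone, drives overall judgment.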

How Performance, Set, and Setting Combine

Research by Juslin has revealed listeners' innate ability to flexibly interpret a performer's unique auditory cues. A single code does not need to be shared by everyone; there is enough overlap and natural empathy that listeners understand the performer's emotional intent. The same idea extends to visual cues, where facial expressions, gestures, and body movements are incorporated into a performance to provide important non-auditory information. Yet, amazingly, humans understand the visual cues associated with a musical piece even without seeing the performer. The more a performer channels their emotions through auditory and visual expression, the better listeners can naturally empathize with those emotions, even when the visual information is missing. Up to 70% of the variance in a listener's final affective perception can be explained by the performer's use of cues. Adding set and setting into the equation does not fundamentally change the experience for audience members: while the setting can introduce some non-musical contamination, the affective response remains the leading variable in the overall judgment of a performance.


VISUAL ACOUSTIC EXPERIENCE

Cutting-edge startup redefining sensory experiences. We create unparalleled technology for immersion in auditory landscapes.

Copyright ©2024 VA Visual Acoustic Technologies GmbH. All rights reserved.
