I’ve talked about it many times before, but the way music uses sounds, notes, and pitches is no different from the way films, video games, or any other visual medium use imagery and color. Just like how we assign meaning to certain colors and images, we also assign emotions and meanings to certain sounds. For example, on a cultural level, red represents passion or violence, and a minor chord represents sadness. We have an implicit cultural understanding of certain sounds, and when they are all put together, they can form a certain feeling, progression, or narrative that ends up creating a satisfying listening experience.

Basically, whether anyone is aware of it or not, we all speak a musical language, where sentences, words, and alphabets exist as musical phrases, chords, and notes. I’m mixing metaphors here, I know, but the point stands: all forms of expression–from music to imagery to language–are not as different as we might think.

But it is also possible for composers and musicians to create their own musical languages in individual works. Richard Wagner was a composer who understood the idea of a musical language and used it to his advantage when writing operas. He was one of the first to popularize the idea of the “leitmotif”: a short musical phrase that is associated with a person, place, thing, or idea. Say he had a leitmotif that represented a main character, and another that represented, let’s say, stabbing. In a scene where a guy got stabbed, he could put the two phrases together to better represent the emotion of the scene (is stabbing an emotion?). He used these leitmotifs to let musical phrases drive the narrative and emotion of the opera forward, allowing him to create some of the genre’s most acclaimed works.

This idea has carried over to film scoring. Take, for instance, John Williams’s iconic Star Wars theme, often called “Luke’s Theme.” The theme (or parts of it, at least) is actually used throughout the film at many different points to underscore Luke’s emotional state. If he’s doing something heroic, we suddenly hear it as the familiar Star Wars fanfare. But when Luke is scared or uneasy? The theme is softer and more subdued, reflecting the tone of the scene and Luke’s emotions.

So, what does this have to do with video games? Well, an issue I’ve experienced with many modern titles (especially in more open-world, sandbox kinds of games or games that go for a cinematic feel) is that they often lack musical cues that can heighten the emotion and satisfaction of gameplay. Scripted sequences and cut-scenes often have cinematic scores that punctuate the moods of the scene to great effect. But when it comes to user-dictated gameplay, the only thing a player hears is ambient noise or possibly an extremely subdued score. The music very rarely reacts to the player’s actions, which leads to a less satisfying experience.

Game composers and designers could work together to create leitmotifs that are triggered at key moments, by key characters, or by picking up certain objects. Pick up a powerful new weapon? The game reacts with an empowering leitmotif layered into the soundtrack. Losing health? A suspenseful leitmotif fades in. By layering different themes on top of one another, a dynamic soundtrack could be built in a subtle but effective way. Suddenly, the soundtrack becomes just as interactive and reactive as the game itself. And more importantly: the music of the game underscores the emotions of the player.

I’m no game designer, mostly just a game appreciator, but given my very basic understanding of game programming, this could be implemented pretty easily. I mean, games already use certain checkpoints to cue music, so I don’t see why that concept couldn’t be expanded and used in other ways. Plus, it’s been done plenty of times before.
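To make the idea concrete, here’s a minimal sketch of what event-triggered leitmotif layering might look like in code. Everything here is hypothetical for illustration–the `LeitmotifMixer` class, the event names, and the track names are all made up; a real game would hook this logic into its engine’s actual audio API and handle fades properly.

```python
# Hypothetical sketch: gameplay events toggle leitmotif layers
# on top of a base score. Names are illustrative, not a real API.

class LeitmotifMixer:
    """Maps game events to musical layers and tracks which
    layers are currently playing over the base score."""

    def __init__(self, base_track):
        self.base_track = base_track
        self.layers = {}   # event name -> leitmotif layer name
        self.active = []   # layers currently playing, in order added

    def register(self, event, layer_track):
        """Associate a gameplay event with a leitmotif layer."""
        self.layers[event] = layer_track

    def on_event(self, event):
        """Called by gameplay code when something notable happens;
        fades the matching leitmotif in (here: just adds it)."""
        layer = self.layers.get(event)
        if layer and layer not in self.active:
            self.active.append(layer)

    def on_event_end(self, event):
        """Called when the condition ends; fades the layer back out."""
        layer = self.layers.get(event)
        if layer in self.active:
            self.active.remove(layer)

    def now_playing(self):
        """The full stack of tracks the player currently hears."""
        return [self.base_track] + self.active


# Usage: grab a weapon, take damage, then recover.
mixer = LeitmotifMixer("overworld_theme")
mixer.register("weapon_pickup", "power_motif")
mixer.register("low_health", "danger_motif")

mixer.on_event("weapon_pickup")
mixer.on_event("low_health")
print(mixer.now_playing())  # ['overworld_theme', 'power_motif', 'danger_motif']

mixer.on_event_end("low_health")
print(mixer.now_playing())  # ['overworld_theme', 'power_motif']
```

The point of the sketch is just that the bookkeeping is trivial: the hard part isn’t the code, it’s composing layers that sound good stacked in any combination.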

There are examples where this idea has already been put to use to great effect. The “Yoshi Theme” in Super Mario World, which layers on top of the main theme of the level, only heightens the “hell yeah, I just got Yoshi” feeling a player gets. And the sudden alert leitmotif that plays when a character is spotted in the Metal Gear Solid series (along with the getaway theme) is effective enough to trigger a Pavlovian response in gamers even when it is completely divorced from the gameplay experience.

I’m sure others could think of a million more examples. In Banjo-Kazooie, when you are higher up in certain areas, the level’s music becomes lighter and airier, with wind sounds overlaid. And when you are submerged underwater, the music sounds much more midrange-y, just like sounds do when you are actually underwater. Not quite the same idea, but along the same lines.

It all shows what is possible in terms of giving video games more immersion and atmosphere through dynamic, constantly changing music. More importantly, it gives the player’s actions more weight and consequence, something all games can benefit from. It’s just a shame that the examples above are isolated instances, and that there aren’t more games that allow player actions to dictate the musical score on a larger scale.

While plenty of examples exist, there is still endless potential for this idea to be developed, and for video game music to become a much more integral part of games as they grow as a storytelling medium. For that to happen, composers need to collaborate with game developers the way visual designers already do. In other words, the first step to making video game music better is to improve our communication about the extensive musical language we all implicitly understand yet struggle to discuss.

But games are still a growing medium, and if people care enough, there’s still plenty of time to make music in games better.