✖ A playbook for musicians wanting to stay relevant in the age of audio-first
And: the stickiest social apps; gaming and creativity in music; PRIMACOV rapid testing trial; Tomorrowland's virtual stages
There’s an interesting trope in the study of sound, namely that listening places the subject squarely in the centre of their environment, whereas seeing places them on the edge of it, looking in. Social media, in the broadest sense of the term, have mostly been visual, focused on people showing gilded visions of their lives while others look in via images and videos. More recently, however, there’s been a lot of hype around audio: from Spotify’s ‘audio first’ strategy to Clubhouse, and from Matthew Ball’s ‘Audio’s Opportunity’ to Andrew Chen’s social+ audio. Both the platforms and the investors see a bright future for audio. It’s not hard to imagine why, in the words of Chen:
The draw of audio apps over other traditional formats is obvious to any podcast (and music) devotee: the ease. That lean-back, hands-free experience means that audio apps generally don’t compete with a vast competitive library of other startups. Instead, they compete with washing dishes, working out, driving ... Easy competition!
This means it’s straightforward to engage with audio while doing something else. And yet, I can’t imagine most people who create audio experiences online (lofi hip-hop type creators aside) are looking for listeners to divide their attention. At the same time, I bet you’ll have a hard time remembering the last time you gave an album your undivided attention. I will explore three ways in which music can stay relevant if audio is the future: lean into the lean-back experience; create embodied aural experiences; become user-generated content.
The lean-back experience
As listeners, we love to discover new music through the playlists that proliferate on streaming services. However, we mostly start a playlist for ease of use and because we don’t want to think about what, specifically, we want to listen to. In the early years of Spotify there was often a distinction drawn between the on-demand, more lean-in listening the service offered and radio or Pandora. Two years ago Spotify even launched a Pandora competitor called Spotify Stations. But even then, Spotify’s main app was already very heavy on playlists, something that led Liz Pelly to write, in her 2017 article on the automation of selling out, that
“playlists have spawned a new type of music listener, one who thinks less about the artist or album they are seeking out, and instead connects with emotions, moods and activities, where they just pick a playlist and let it roll.”
How can musicians gain from this? By composing music that fits all these mood-based and activity-based playlists. There’s no place here, however, for avant-garde, noisy, difficult music, only for chill, mellow background audio.
Embodied aural experiences
Listening is always a two-way street, a back and forth between at least two people. Current developments in gaming and VR bring this listening experience into a new type of embodied experience for music: the virtual. Inside these virtual environments, aural cues, even more so than visual ones, give a true sense of spatiality. Musicians can play into these developments by creating music that fits this purpose and plays into audio’s strength of place-making. One way of doing this is by recording music with binaural audio. Simply put, this means setting up the recording so that the sound is captured the way your own two ears would capture it, typically with microphones placed where a listener’s (or a dummy head’s) ears would be.
An added benefit of both binaural sound and gaming and VR worlds is that they involve headphones. By closing off the world around us, headphones shift our attention away from our physical surroundings and towards the virtual environment we embody. Being inside these worlds requires attentive listening, and music can help steer people’s movements and develop storylines.
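As a rough illustration of what these binaural cues boil down to, here is a minimal Python sketch that fakes the two most basic ones, the interaural time and level differences, for a mono signal. It is my own illustration rather than anything from the pieces linked here; the function name and the constants are assumptions chosen for readability, and real binaural work relies on dummy-head recording or HRTF filtering, not this simplification.

```python
import numpy as np

def simple_binaural_pan(mono, sample_rate, azimuth_deg):
    """Crude illustration of binaural cues: delay and attenuate one channel
    to suggest a source direction. Real binaural audio relies on dummy-head
    recording or HRTF filtering, not this simplification."""
    azimuth = np.radians(azimuth_deg)       # 0 = straight ahead, +90 = hard right
    head_radius = 0.0875                    # approximate head radius in metres
    speed_of_sound = 343.0                  # metres per second

    # Woodworth-style approximation of the interaural time difference (ITD).
    itd = (head_radius / speed_of_sound) * (abs(azimuth) + np.sin(abs(azimuth)))
    delay_samples = int(round(itd * sample_rate))

    # Crude interaural level difference (ILD): quieter at the far ear.
    far_gain = 1.0 - 0.4 * abs(np.sin(azimuth))

    far_ear = far_gain * np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]
    if azimuth_deg >= 0:                    # source on the right: left ear is the far ear
        left, right = far_ear, mono
    else:                                   # source on the left: right ear is the far ear
        left, right = mono, far_ear
    return np.stack([left, right], axis=1)  # stereo array of shape (n_samples, 2)

# Usage: place a one-second 440 Hz tone roughly 60 degrees to the right.
sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440.0 * t)
stereo = simple_binaural_pan(tone, sr, azimuth_deg=60)
```

The point is simply that ‘hearing with two ears’ comes down to small timing and level differences between the left and right channels, which is why headphones render these cues so convincingly.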
User-generated music
There’s seemingly endless potential in user-generated content, but music - and even audio more generally - still needs to properly tap into it. That we’re on the cusp of this seems evident from the many creator tools proliferating at the moment. All of these tools, and the ease of access to music production they provide, allow talent to surface more easily than a couple of decades ago (hello BeatStars). Similarly, they allow for exactly the type of music creation that benefits the ubiquitous playlists I mentioned earlier (hello fake artists).
Is there a middle ground or sweet spot where these extremes come together? If we turn to MIDiA’s research into the rising power of UGC, the answer is no. Instead of looking for a sweet spot, we should see this as a spectrum, a continuum. UGC, for example, reaches audiences that professionally produced music often does not. Moreover, this
“vast array of [UGC] is distributed via multiple platforms, with the ultimate objective being to engage fans by absorbing their time in the attention economy. Along this continuum comes a spectrum of ways to compensate creators – be that from ad revenue, subscriptions, ticket sales or physical products.” (Rising Power of UGC, MIDiA, p. 23)
It’s possible for artists to guide the music creation process by opening up to their fans. Think of NIN’s Year Zero marketing campaign, Mike Shinoda’s Twitch album, or Tones and I’s Roblox sound pack.
A final point here: when looking at the potential for UGC and music, it’s important to make a distinction between artists and non-artists. By which I mean that the ubiquity of creator tools and UGC allows everyone to make cool sounds cheaply and quickly, but that doesn’t mean everyone is suddenly an innovator who shifts public perception and kickstarts new trends.
Conclusion: shared listening
Listening to sound places you at the centre of the space in which the experience takes place. As we adapt to the rhythms of our changing virtual environments, musicians need to find out how they can resonate. They have to think about how their music and music-making processes help them connect with their fans, find new audiences, and tap into new revenue streams. Equally, they need to consider how and where their music is experienced.
All of this is happening in a world that seems to be increasingly focused on audio, but not necessarily on music. And yet, listening helps to construct identity, and that gives musicians the power to stay relevant as audio cultures shift around them. They can lean into different listening disciplines and still push sonic boundaries.
Maarten Walraven
TECH
🖥️ An interview with Holly Herndon is always welcome and this one is no different as she talks eloquently about music, AI and machine learning, and the production of music.
“There’s this weird perception of me as a machine computer girl, but computers are part of us! It makes total sense that it sounds human because it is. It’s part of us. I think when we start to see AI - well, actually, I prefer the phrase machine learning, it’s more descriptive - as a separate entity, we erase the human input that went into it. We’re starting to see this as some kind of alien other that came out of nowhere but that’s misleading. So it is very human.”
⏏️ There’s an excellent companion piece to the a16z article by Andrew Chen that I mentioned in the intro above. The title says it all: The Stickiest, Most Addictive, Most Engaging, and Fastest-Growing Social Apps—and How to Measure Them. SoundCloud and Audiomack steal the show when it comes to music.
⚙️ Again resonating with the above piece, Cherie Hu asks how “gaming can lead to whole new ways of thinking about creativity and storytelling in music — in a way that draws direct influence from fans.”
👩💻 Saffron is a music tech initiative that wants to increase the number of women and non-binary people working in the industry. Currently, just 5% of people working in music tech are women. They’re organising an event, 7 Days of Sound, starting 23 January.
💨 Questions surrounding peak streaming, this time on Billboard.
CORONA
🎭 Fortune has a great article on how the pandemic will transform Broadway musicals, arguing that musicals will head to the screen more and more. Historically, broadcasting a musical on TV was complicated by the rights situation, but that changed in March. Fortune sees this as a move that makes Broadway more available to a global audience. There are, of course, clear parallels to the music livestreaming industry.
🌱 Primavera Sound in Barcelona organised a successful trial called PRIMACOV in which they “welcomed 1,042 people to the 1,608-capacity venue for a clinical study designed to show whether rapid testing could hold the key to staging concerts without social distancing.” The results will be published in January.
🗺️ Tomorrowland has released a trailer showcasing the virtual stages for their New Year’s Eve party. Organisers call the virtual setting future-proof, indicating they will continue to use it post-pandemic.
🎄 In the US, Christmas music arrived on the radio earlier than ever, with some stations already starting rotation in July, and to positive feedback. People were, it seems, ready for this year to end when it was only midway through.
🍁 Over in Canada we see a hopeful precedent, with private-sector support coming in the form of Kinaxis partnering with the Canadian Live Music Association to set up a fund for livestreams. “Promoters and festivals must apply in partnership with a live music venue, or indicate which venue they plan to rent for the stream.” Applications can be filled in here until 28 February for livestreams running between January and June 2021.
MUSIC
I mentioned Holly Herndon in the TECH round-up. Her Proto was one of my favourite albums of 2019. It sounds like the future while also sounding immediately human. An amazing work of electronic music.
✖ MUSIC x, founded by Bas Grasmayer and co-edited by Maarten Walraven.
❤️ patreon - twitter - musicxtechxfuture.com - musicxgreen.com