✘ Quantum music, or the brain as walkman
And: A £1 idea to save small music venues; Dance music's bitter division over boycotts; DEI in public funding; Detecting AI music shouldn't be opt-out; Who's responsible for labeling AI music
I’ve been trying to wrap my head around quantum computing for years now, but I feel like I’m no further than the bare-bones basics. Classical computing is built on Boolean logic: everything is binary, expressed in 0s and 1s. What quantum computing does is allow 0 and 1 to exist at the same time, in superposition. What does that mean? How can we grasp this theory? And, most importantly, what does it mean for music?
A little history
In the late 19th and early 20th centuries, scientists tried to explain phenomena they observed at the tiniest scale possible: the atomic. This led to famous results like Planck’s constant, Einstein’s theory of relativity, and Heisenberg’s Uncertainty Principle. Without going into detail, what happened was a shift in understanding: from the world as an observable whole to the realisation that things like space and time are not absolute, but dynamic.
Computer science started out based on Boolean logic. Everything we use to interact with each other through computing is based on the bit: the binary digit. But quantum mechanics was present even early in the development of computer technology. The transistors that powered the first computers - and still power our laptops, phones, and desktops today - all require quantum mechanics for their design. Quantum mechanics and computer science have, in other words, been entangled from the beginning.
In quantum computing, that state of superposition - where 0 and 1 occur at the same time - gets captured in a qubit: a quantum bit. Developing qubits is hard, because they have to be isolated from their environment. Superconducting qubits also need extreme cooling, which relies on helium-3, a nuclear by-product. The qubit is also where things get difficult to comprehend, because we don’t experience this state in our everyday human lives, even though it happens at the atomic level all the time. Analogies like Schrödinger’s cat or a spinning coin can definitely help: they convey a state of unknowing in which multiple things - or at least two things - are possible at once. There’s a good explainer video from the Technical University Delft on how quantum computers work.
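To make the superposition idea a little more concrete, here’s a deliberately simplified toy model in Python - not real quantum hardware, just the bookkeeping: a qubit holds two amplitudes, and measuring it collapses the state to 0 or 1 with probabilities given by the squared amplitudes.

```python
import random

# Toy model of a single qubit: two amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 == 1. Measurement collapses the state to
# 0 with probability |alpha|^2, and to 1 with probability |beta|^2.

def measure(alpha: complex, beta: complex) -> int:
    p0 = abs(alpha) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition: both outcomes equally likely until measured.
alpha = beta = 0.5 ** 0.5  # 1/sqrt(2)
outcomes = [measure(alpha, beta) for _ in range(10_000)]
print(sum(outcomes) / len(outcomes))  # hovers around 0.5
```

The point the spinning-coin analogy makes is visible here: before `measure` is called, the state is genuinely both; only the act of measuring forces a single answer.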
When quantum gets creative
If you watched the video, you already understand that it’s at scale that quantum computing starts to matter. It can also, through something called entanglement, link qubits together over vast distances. Eduardo Reck Miranda, one of the foremost innovators in quantum music, used this to compose a work released last year called Qubism.
For its first performance, he collaborated with the London Sinfonietta and had a violinist duet with an IBM quantum computer about 5,500 kilometres away, made possible by quantum entanglement. Here’s Reck Miranda on how this worked:
“At specific moments during the performance, the quantum computer “listened” to the violin and produced responses. [We] developed a system to extract sequencing rules from input music. The system encodes the rules into quantum circuits. The circuits tell a quantum computer to generate wavefunctions with amplitudes encoding musical stochasticity. In other words, they encode the probabilities of certain notes following others in a tune. A measurement defines which note follows another.
To generate the responses, a laptop on the stage recorded the violin, made the quantum circuits and relayed them to the quantum computer on the cloud for processing. Then, after a few seconds, the measurements were retrieved from the cloud, and the respective musical responses were synthesised.”
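The core of what Reck Miranda describes - extracting sequencing rules, encoding the probability of one note following another, and letting a measurement pick the next note - can be sketched classically. The code below is my own stand-in, not his system: it learns note-to-note transition probabilities from an input tune and samples from them, which is the role the wavefunction measurement plays on the quantum computer.

```python
import random
from collections import defaultdict

# Learn "which note tends to follow which" from an input tune.
def learn_transitions(tune):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(tune, tune[1:]):
        counts[a][b] += 1
    return counts

# Sample the next note from the learned probabilities - the classical
# analogue of measuring a wavefunction whose amplitudes encode them.
def next_note(counts, current):
    followers = counts[current]
    notes = list(followers)
    weights = [followers[n] for n in notes]
    return random.choices(notes, weights=weights)[0]

tune = ["C", "E", "G", "E", "C", "E", "G", "C"]
counts = learn_transitions(tune)
print(next_note(counts, "E"))  # "G" or "C", weighted 2:1 by the input tune
```

The quantum version encodes those same probabilities as amplitudes in a circuit, so a single measurement does the sampling, and entanglement lets that happen on hardware thousands of kilometres from the stage.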
While Qubism was released as a fairly traditional album, it points the way to a world where music moves beyond the recording as the final product.
A next stage for music
There is, of course, precedent for thinking about music beyond the traditional recording. We see musicians sharing their stems to allow fans to rework songs they love. We see musicians sharing interactive apps that help fans play with musical elements and create new musical experiences for themselves.
And then there’s procedural music in gaming, where the music you hear as a player is influenced by your actions. Karen Collins defined procedural music as a “composition that evolves in real time according to a specific set of rules.” A famous example is the game No Man’s Sky, whose sound designer Paul Weir has spoken at length about the possibilities and limitations this brings along. Here he is explaining how the tool they created specifically for the game works.
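Collins’s definition - music that “evolves in real time according to a specific set of rules” - is easy to illustrate. The rules, layer names, and parameters below are entirely hypothetical (this is not how No Man’s Sky actually works), but they show the basic shape: game state goes in, an arrangement comes out.

```python
# Hypothetical procedural-music rules: map game state to an arrangement.
def arrange(danger: float, indoors: bool) -> dict:
    layers = ["ambient pad"]
    if danger > 0.3:
        layers.append("pulse bass")
    if danger > 0.7:
        layers.append("percussion")
    tempo = 70 + int(danger * 60)     # calmer scenes run slower
    reverb = 0.8 if indoors else 0.3  # wetter mix in enclosed spaces
    return {"layers": layers, "tempo": tempo, "reverb": reverb}

print(arrange(danger=0.9, indoors=False))
# {'layers': ['ambient pad', 'pulse bass', 'percussion'], 'tempo': 124, 'reverb': 0.3}
```

Every player walks through the same rules but a different sequence of states, so every player hears a different score - which is exactly what makes the question of listening to this music outside the game so interesting.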
Cherie Hu, from Water & Music, once asked my students at Utrecht University a related question: how do you make procedural music interesting, and even listenable, outside of the game environment? It’s this question that we can now start to answer with the advent of quantum music.
Reactive music & generative listening
The idea of music that responds to our environment and our selves isn’t new. Two years ago, I wrote about brain-computer-music-interfaces, where I mentioned Reck Miranda’s Brainwave Quartet. That year, Tristra also wrote about how music is losing its static qualities. There are many examples of apps trying to establish a kind of interactive soundtrack to our lives. Take RjDj, or Reality Jockey, a now-defunct start-up whose app responded to inputs like speed, sound, and visuals through various sensors. Through these inputs, the person using RjDj influenced the music they heard through their actions - just like in a game.
Reality Jockey also developed the Inception App, released alongside the film, which aimed to induce dream states through sonic inputs based on environmental and action cues. They called this ‘reactive music’, a term also taken from gaming. Take the example of Ape Out (h/t to Cherie again for this one), where a vast library of drum samples creates the soundtrack based on activity and environmental cues.
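A reactive score like Ape Out’s can be sketched as event-driven sample selection. Everything here is invented for illustration - the event names, sample pools, and thresholds are not taken from the game - but the mechanic is the same: actions trigger hits, and intensity steers which sounds are eligible.

```python
import random

# Hypothetical pools of drum samples, grouped by mood/intensity.
SAMPLE_POOLS = {
    "calm":    ["brush_snare", "soft_ride"],
    "chase":   ["tom_hit", "crash", "rimshot"],
    "violent": ["crash", "double_kick", "splash"],
}

def hit_for(event: str, intensity: float) -> str:
    if event == "enemy_down":
        return "crash"  # big moments get a fixed accent
    mood = "calm" if intensity < 0.3 else "chase" if intensity < 0.7 else "violent"
    return random.choice(SAMPLE_POOLS[mood])

print(hit_for("footstep", intensity=0.9))
```

The soundtrack is never composed in advance; it is assembled, hit by hit, from what the player does - the engagement with the world that the next paragraph picks up on.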
Playing games that involve procedural music - whether expansive ones like No Man’s Sky or more closed-world reactive music games like Ape Out - creates a different engagement with those worlds. Bringing that into the real world breaks open a whole new set of possibilities. Similar to the quantum state of superposition, and building on the ‘walkman effect’, this leads to multiple different sound worlds existing at the same time.
This extends beyond the music and the composition to involve the listener, the player. Drawing on researchers like Salomé Voegelin, we can understand this listening to be generative. In her book Sonic Possible Worlds, Voegelin argues for something she terms generative listening, which of course resembles the idea of generative music. One thing to pick up from Voegelin’s work is the importance of the materiality of the world, both physical and sonic, to the affect of listening. The listener thus has an active role and is herself superimposed on and entangled in the sound worlds.
One area where this idea of quantum music and brain-computer-music-interfaces finds practical application is health and wellbeing. One startup working on this is Kuma, which takes the idea of our brains as instruments and applies it to influencing our emotional state.
Kuma uses qubits to take the input from the brain and generates soundscapes in response. This works similarly to the interaction between the violinist and the quantum computer in Reck Miranda’s Qubism. Our listening becomes generative, our listening becomes quantum, where we find new ways to express the way we superimpose our selves into the world around us.
The brain as walkman
Around the time Reality Jockey was founded in 2008, and Karen Collins published her seminal writing on procedural music in 2009, the world was enthralled with the mobile phone. The iPhone was hailed as a kind of walkman for generative music, even if it never really lived up to that promise. What we’re seeing now, more than 15 years later, is that with quantum computing and the harnessing of qubits, it’s not the phone but our brains and bodies that are becoming the walkman.
LINKS
💷 There’s a £1 idea that could save small music venues. Is Live Nation holding it back? (Dan McCarthy)
“But a huge funding gap remains. For concerts in 2025, more than 22m of 24.2m eligible tickets are sold without the £1 contribution, according to industry data shared with the Guardian. And while the early picture for 2026 shows positive momentum, with uptake rising to 28%, this still translates to millions in missing potential support. “The industry is very good at adding fees where the company adding the fee is the beneficiary, and not quite so efficient when the money is for the wider ecosystem,” says Mark Davyd, chief executive of Music Venue Trust.”
✘ Such a simple idea, such a logical idea, such a well-supported idea - and yet it resists implementation. Is this £1 idea antimemetic?
📆 ‘We’re ripping ourselves to shreds’: with dance music bitterly divided, how far should cultural boycotts go? (Phin Jennings)
“It is often emerging artists who lose out. An appearance on Boiler Room can jumpstart a DJ’s career, but that’s now offset against the reputational cost of being perceived as having stepped out of line. “These smaller acts, who can barely get by paying their rent, are always the ones sacrificing themselves,” Jyoty says.”
✘ If you want a more objective account of what’s going on around private equity ownership and the festivals and parties we probably all know and love, this is a good one. It shows multiple sides of the discussion instead of just one. Moreover, it lands on the kind of conclusion I personally think we should focus on more: let’s build new things when we can no longer align with existing structures.
🏳️🌈 Diversity, Equity, and Inclusion in Cultural Public Funding, or Lack Thereof (Nadia Says)
“There are regular cases, some make it into the press, most do not, but personal bias and interest, or even corruption, definitely play a big part in the lack of DEI in cultural public funding. Sure, it is difficult to fully put aside one’s own tastes when awarding cultural funds; we are only human after all. So how about having diverse cultural workers and experts who can then distribute the funds to a wider range of artists and projects? Again, some institutions try and have diverse decision panels, but this is not yet the norm and most of the decisions are still predominantly made by white middle and higher class folks who believe in heteronormativity and do not even seem to know that trans people exist.”
✘ A piece from 2023, but relevant to my recent pieces on Creative R&D and the need for diversified funding models. What’s more, it resonates with the above piece about private equity issues. One model often held up as the ‘other end of the spectrum’ from private ownership is public funding. Of course, that doesn’t come without its own issues, and it’s important to be aware of them.
🕵️♀️ Why detecting AI music copyright matters more after Sora’s opt-out disaster (Virginie Berger)
“For the music industry, this moment was familiar. The opt-out model is their present reality. Every day, AI systems ingest millions of songs to learn harmonic patterns, melodic structures, production techniques. Synthetic tracks multiply across streaming services; some clearly derivative, and others are ambiguously close to existing works. Legal frameworks lag. Detection tools struggle. And creators are left with a choice: constantly monitor the internet for infringement or accept that their work will be fodder for the next generation of AI models.”
✘ Virginie builds on the article about neural fingerprinting that I shared last week. Technology solutions are popping up for the well-documented problems around music, AI, and the input and output issues surrounding attribution and detection.
🏷️ Who’s responsible for labeling AI music? (Cherie Hu)
“But with rights holders now feeling a direct threat to their market share from AI tracks, labeling has quickly emerged as the first and most popular line of defense. In response, competing philosophies have emerged. Streaming service Deezer has adopted a top-down, platform-enforced model, using proprietary technology to detect and label AI content automatically. In contrast, market leader Spotify is advocating for a collaborative, supply-chain disclosure model that relies on voluntary metadata submissions from creators, rights holders, and distributors.”
✘ Virginie’s piece and Cherie’s piece here can be read together well. We’re in an interesting moment around labelling/attribution and how that will impact what gets monetized, how that gets done, and who controls this. The power dynamics are real.
MUSIC
I love Kitbashing, the new record by ABADIR. Here’s what they write about it themselves: “Using audio snippets from reels, suggested posts, and sponsored ads, sound objects are cut, micro-edited and meshed together into a collage. Tension is built gradually before falling into a release. Rhythms keep expanding and contracting, simulating the pace of random scrolling on mobile devices. ‘Kitbashing’ is an attempt to break the algorithm: subverting the dynamics by feeding on the algorithm instead of only feeding it.”