The “Forever” Stakes of Generative AI Ownership
On synthetic data and dead labour
Last month, music writer and worker Bas Graymayer wrote an interesting column on his Calm & Fluffy newsletter exploring who owns emerging creative technology platforms, and more specifically, who owns generative music AI platforms: systems trained on almost the entirety of recorded music.
In the piece, Bas correctly points out that most generative AI companies have trained their models on data[1] scraped freely from the internet: the world’s music, books, blog posts, and so on. He also notes that big technology companies capture value both via direct revenues AND via speculative valuations in public markets:
Meta, TikTok, Amazon, Google, and similar companies generate revenue by aggregating the value created by many individuals. This creates value in terms of revenue, as well as speculative value in the form of shares.
Spotify, for example, achieved its first full-year net profit only in 2024, yet it aggregates the value created by every artist on the platform, and that value is captured in its stock, which is owned by its founders and shareholders[2] and generates massive returns. In May of this year, Music Business Worldwide reported that Daniel Ek has, to date, cashed out $800M+ in Spotify stock. Meanwhile, artists with music on the platform receive somewhere between $0.003 and $0.005 per play (depending on monthly pro-rata calculations), and songs with under 1,000 plays in a year now receive no royalties at all.
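To make the payout arithmetic concrete, here is a minimal sketch in Python. The per-play rate and the 1,000-play threshold are the figures cited above; the function itself is an illustration, not Spotify’s actual pro-rata formula.

```python
# Illustrative sketch of per-stream royalty arithmetic.
# The rate and the 1,000-play eligibility threshold are the figures cited
# in the text; the real pro-rata calculation is considerably more complex.

MIN_ANNUAL_PLAYS = 1_000  # tracks below this threshold now earn nothing


def annual_royalty(plays: int, per_play_rate: float = 0.004) -> float:
    """Estimate a track's annual royalty under a simple per-play model."""
    if plays < MIN_ANNUAL_PLAYS:
        return 0.0
    return plays * per_play_rate


# A track with 900 plays earns nothing; 100,000 plays at $0.004 earn $400.
print(annual_royalty(900))      # 0.0
print(annual_royalty(100_000))  # 400.0
```

Note how the threshold works in this toy model: below 1,000 plays, the artist’s share is zero by design, so small catalogues become pure free input for the platform.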
There is a clear pattern: technology companies build their businesses on the value created by artists and other creative workers, while maintaining ownership and governance[3] structures that primarily funnel that value to founders and shareholders. In practice, this means that investors, who contribute no labour, capture a significant share of the returns through passive income, while founders and executives often receive rewards that are disproportionate to their actual labour contributions. Meanwhile, the artists and regular employees whose work underpins the continuing value of the entire enterprise see far less of the upside, despite bearing much of the creative and operational burden. This model is, of course, simply business as usual under our system of capitalism, and the music industry is no exception.
This brings us to the specific case of large-scale generative music AI.[4] Platforms such as Suno and Udio built their systems without first seeking consent or securing licenses from artists. Instead, they adopted an “ask forgiveness later” strategy, scraping vast amounts of music from across the internet to train their models without negotiating copyright licensing agreements or paying for the use of that material.[5]
While some companies that trained their models without consent are now striking post facto licensing deals, others are fighting in the courts to claim that this scraping and training qualifies as ‘fair use’, with the goal of paying the low fee of $0 for training on the entirety of the world’s music.
What’s clear to most of us is that companies are using artists’ copyrighted work to train generative AI systems that can produce new songs designed to mimic, and directly compete with, the originals in the marketplace.
News stories confirming this market disruption arrive daily. Recently, Rolling Stone reported that the Welsh hardcore band Holding Absence discovered an AI-generated act called “Bleeding Verse,” which cites them as an influence and has already surpassed their monthly listeners on Spotify. This, despite Holding Absence having toured internationally and released music since 2015. On Deezer, 28% of all music delivered to the platform is AI-generated.[6] A recent study by CISAC warns that AI-generated music could constitute 60% of music libraries’ revenues by 2028, disrupting sync as a form of revenue for human artists.
This logic is often dismissed with a wave of the hand: “artists can just make more music.” But this ignores the reality that artists are now forced to compete not only with others, but with infinite versions of themselves. Every “copy” of their sound generated by AI dilutes their presence in the marketplace, fragments the attention of the audience they worked hard to develop, and drives down the value of their work. It’s like the uncanny doubles in Jordan Peele’s Us trying to replace their originals: a distorted mirror image crowding out the real thing.
In summary, what we are seeing here is AI companies taking the music created through the labour of artists and using it to generate value that provides little benefit to those artists.
Given this, Bas asks the important question:
Should people and companies whose data is used to train LLMs participate in the financial success of these corporations?
To this, I would argue yes, absolutely! Those who create the underlying value should share in the profits it generates. This is not only a matter of financial participation. It is also a question of power and equitable behaviour. If generative AI platforms are built on models trained with artists’ copyrighted works, then artists must have an ongoing stake in how those models are governed, deployed, and commercialized. Financial participation should also be paired with meaningful governance rights.
Understanding the Stakes
Bas’s argument points us toward a critical question: who benefits when creative labour is absorbed into the infrastructures of AI? His framing around ownership highlights how the value generated by these systems flows overwhelmingly to the companies and investors who control them, and not to the artists, writers, and cultural workers whose work makes these systems possible.
Where I want to build on this is by emphasizing just how uniquely high the stakes of this ownership question are when it comes to generative AI. Unlike past technological shifts, where artists could at least retain some control over their work through licensing and withdrawal, the integration of copyrighted material into AI models effectively makes that control difficult to reverse. Once creative labour is embedded in a model’s weights, it becomes part of a system that can produce endless outputs without ever returning to the source of that labour.
This is not simply a question of missed royalties; it’s about the permanent transfer of both economic value and governance (decision-making) power. If artists are excluded from ownership and governance now, they risk being locked out of the systems that will shape the future of cultural production.
The “forever” stakes of AI lie in this moment of consolidation, where failing to secure a meaningful role in the structure of these platforms means surrendering control over the creative commons for generations to come.
Let’s walk through the stakes more clearly.

Scholar David Harvey provides us with a simple but powerful diagram illustrating how commodities are produced and capital accumulates through the circulation of value. In his model, labour power sits at the foundation of the system, serving as an essential input in the creation of commodities (see Fig. 1, circled in red).
To translate this into music industry terms: artists and creative workers provide the labour that gets turned into a sellable product — the song or album. Songwriters, performers, and recording teams put in the creative work, which is then paired with capital: studios, instruments, and recording technology.[7] Once the music is produced, it’s distributed through labels, publishers, distributors, and digital streaming platforms. While artists do receive a share through royalties or rights income, a much larger portion of the value flows upward to those who control the infrastructure.[8] This is why debates over streaming payouts and artist compensation are, at their core, debates about who captures the value generated by creative labour.
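For readers who don’t have Harvey’s diagram to hand: it elaborates Marx’s general formula for the circuit of productive capital, which in standard notation runs:

```latex
% Marx's circuit of productive capital, elaborated in Harvey's diagram:
% money capital (M) buys commodities (C), namely labour power (LP) and
% means of production (MP); production (P) yields a new commodity (C')
% sold for expanded money capital (M' = M + surplus value).
M \longrightarrow C\,\{LP,\ MP\} \cdots P \cdots C' \longrightarrow M'
```

In the music-industry translation, LP is the creative labour of songwriters and performers, MP the studios and recording technology, C′ the finished song or album, and the difference M′ − M the surplus captured, disproportionately, by those who control distribution.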
In the traditional music industry, the continual production of music and the profits tied to it depend on a steady input of new creative labour. Artists must keep writing, performing, and recording new songs to bring fresh commodities to market. Everyone else in the value chain — labels, publishers, distributors, and platforms — relies on that ongoing creative output and must, in some form, negotiate with artists or rights holders to access it. While these negotiations often reflect unequal power dynamics, they at least acknowledge the artist as the source of value. And even within an exploitative system, that dependence gives artists a degree of leverage, especially now that digital distribution has expanded their pathways to reach audiences directly.
If traditional exploitation is bad, generative AI is catastrophic.
Generative AI shatters the above dynamic. Instead of relying on artists to produce new work, AI systems extract the value of their past labour, using it to generate endless new, monetizable outputs. What was once ongoing exploitation becomes outright expropriation: the entire future value contained in artists’ creations can be captured and monetized without their participation, consent, or control.[9]

This marks a fundamental break in the circuit of cultural production. In the traditional structure, new songs require a continuous input of living labour. Artists must actively create new works for the circuit to continue.
Generative AI collapses the structure depicted in Harvey’s diagram inward (see: Fig. 2), converting the accumulated archive of creative production into what Marx called dead labour: the crystallized results of human creativity, now stored as training data inside technological systems.[10] In this new circuit, Harvey’s original annotation of “Free Gifts of Human Nature,” which refers to the foundational role of human creativity and labour in production, takes on a starkly literal meaning: artists’ creative work becomes a free, unending resource for generative AI companies to produce value.
Effectively, through the training process, the accumulated history of creative labour, which is embodied in millions of copyrighted musical works, has been converted into capital, materialized as generative AI models that artists neither own nor control. What once required a steady input of human labour has been absorbed into technological systems capable of endlessly generating new commodities (songs, sounds, and styles!) on demand. This dynamic becomes even more extreme when companies train models on synthetic data — content generated by the model itself — allowing these systems to reproduce and expand without ever returning to living creative labour as a source.
This marks a profound shift: AI platforms can now grow through dead creative labour alone. The model’s owners — not the artists and creative workers who produced the songs — can extract virtually unlimited value from decades of creative labour with near-zero marginal creative labour costs going forward.[11] This isn’t simply a continuation of exploitation; it’s a different form of accumulation in which artists’ past work is endlessly monetized while their ongoing creative agency, and share in the value chain, is erased. They are dispossessed of their labour.[12]
What makes this situation even more consequential is its potential for permanence. On platforms like Spotify, artists retain at least some control. They can withdraw their music once a license ends or if a platform behaves in ways that violate their ethical standards. But with generative AI models, post-licensing removal is difficult if not impossible. As The Trichordist observes, “even if unauthorized data is deleted, trained models still ‘remember’ it.” Even when licensing is legal and time-limited, disentangling a model from the influence of a specific artist’s work is technically difficult. Once their labour has been absorbed into a model’s weights, artists lose not only compensation but also meaningful control over how their work shapes future cultural production.
Securing the Future of Music!
Generative AI represents not just another technological disruption in music, but a structural turning point.
By absorbing the accumulated labour of artists into models that can endlessly generate new works without them, these systems threaten to permanently sever the link between creative labour and the value it produces.
What was once exploitation becomes expropriation: the extraction of cultural wealth without consent, compensation, or control.[13]
In the past, artists could generate revenues through time-bound licensing while retaining control over their works into the future; with generative AI, that control effectively disappears. Once their creations are absorbed into a model’s training data, the influence of their work becomes permanent, allowing companies to generate new outputs indefinitely without the artist’s consent, participation, or compensation.
This is why equity ownership for artists cannot be treated as a symbolic gesture or a simple revenue-sharing fix. It must mean real stakes. This includes both financial ownership, ensuring artists benefit directly from the value their work creates, and governance power, ensuring they have a meaningful say in how these models are built, deployed, and monetized.
The stakes are clear. If we are locked into a future where generative music AI is going to exist, then the question becomes whether artists will have any power within it.[14]
While this essay focuses on outlining the stakes of creative dispossession, it’s equally important to imagine pathways toward reclaiming power. One meaningful path is artist-led technological development: systems built, owned, and governed by the very people whose work constitutes their foundation. Of course, the large corporations and investors powering generative AI development are unlikely to voluntarily surrender equity. But by making the terms of this extraction visible, artists can be better equipped to organize, advocate, and push back against exploitative licensing deals, or to build alternatives altogether.[15]
Another path lies in the regulatory sphere, where various proposals are emerging. For example, some have suggested treating AI models like public utilities, subject to public oversight and transparency. Exploring these ideas in detail is worthwhile, but beyond the scope of this essay.
If artists and creative workers do not secure ownership and governance now, they risk being permanently excluded from the infrastructures that will make up a large part of future cultural production. Without intervention, these systems will continue to extract value from artists and creative workers while erasing their agency and economic stake. Securing ownership is not about embracing an AI future. Rather, it is about refusing a future in which artists are written out entirely.[16]
LINKS
This week, no extra links, because, honestly, I’ve already buried you in enough of them in the footnotes😂! Think of the footnotes below as a “choose your own adventure” reading list, and dive in!
MUSIC
I’m always returning to Neil Young’s ‘On the Beach’, which is his best album (fight me! Ditch Trilogy forever!). The album is a complete work of genius, but ‘Vampire Blues’ is a perennial fav. It’s a loose, swampy track. Its imagery of bloodsucking capitalists (in this case, oil barons) resonates with the way Marx described capital’s vampire-like hunger for living labour (which I cite in one of my footnotes below). The song has a way of sounding almost casual, but underneath, it’s furious. It’s a reminder that the dynamics I’m describing in this essay aren’t new—they just keep finding new hosts.
FOOTNOTES

[1] A primary argument that I’m making in this piece is that the digital ‘song’ data being ingested by generative AI models embodies creative labour. Whenever I mention data in the piece, I am therefore also making reference to the creative labour underlying that data.
[2] Famously, Spotify adopted an “ask for forgiveness later” approach to licensing: rather than securing full licenses before launch, it made deals retroactively, ultimately giving equity stakes to its major licensors (including all three major labels and Merlin). While this piece is not focused on artist–label relations, it is worth noting that there was significant debate over whether the labels and organizations that received these equity stakes were obligated to pass on their value to artists. This highlights another key point of leakage related to that discussed in this essay: the value generated through ownership in a music platform does not necessarily flow through to the artists themselves.
[3] While Spotify is publicly traded, its governance remains highly concentrated through a dual-class stock structure that grants co-founders Daniel Ek and Martin Lorentzon effective voting control of the company. Shareholders may influence the company indirectly through market activity, but ultimate decision-making power is centralized in the hands of these two individuals. See “Spotify’s ESG Fail: Governance,” The Trichordist, March 14, 2022 for the Spotify case.
For more on the historical development of corporate regulation, see Cooper (2025), who traces how corporate governance structures have evolved to entrench founder power through special share classes, and demonstrates how this shift has steadily eroded the influence of mass public shareholders and weakened democratic oversight of corporations.
[4] Large-scale generative music AI is a class of machine learning systems trained on large datasets of recorded music, often encompassing entire catalogs or genres, to learn underlying musical patterns and structures. These systems can then autonomously generate new, full-length compositions that imitate or remix those patterns, without requiring direct human musical input at the point of creation. This differs from single-artist or single-catalog generative AI models that are trained on clearly licensed, bounded repertoires, where ownership, control, attribution, and compensation remain tied to identifiable rights holders. In contrast, generative music AI systems are typically trained on vast, often unlicensed or ambiguously licensed datasets, making questions of ownership, attribution, compensation, and consent far more complex. See: Water & Music for an excellent breakdown of the different approaches being taken towards attribution in large-scale generative systems, and the challenges associated with each.
[5] This “move fast and break things” strategy reflects a broader Silicon Valley ethos that has shaped the tech industry for decades. Companies like OpenAI have employed similar approaches, and earlier, figures like Steve Jobs and companies like Meta and Uber normalized ignoring legal and ethical boundaries to gain first-mover advantage. Notably, however, some firms have demonstrated that it is entirely feasible to secure permissions in advance. As an example, UMG’s licensing agreement with Endel, which enables the creation of artist-specific AI models built on artist-authorized catalog material, shows that upfront licensing frameworks can be built into technological development rather than bypassed.
[6] Streaming platforms also have a structural financial incentive to embrace generative AI content. Because royalties are typically calculated on a per-stream basis, works classified as “low value” or delivered through lower-rate licensing agreements (as is often the case with AI-generated music) cost the platform less per play. If users shift their listening away from higher-royalty, human-created works toward cheaper AI-generated content, the platform’s overall payout obligations decrease while subscription or advertising revenue remains constant, effectively increasing its profit margins.
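The incentive described in this footnote can be sketched numerically. All figures below (revenue, stream counts, and especially the lower per-stream rate for AI content) are hypothetical, chosen only to show the direction of the effect.

```python
# Illustrative sketch (hypothetical numbers) of the margin effect described
# above: if total streams and subscription revenue stay constant while
# listening shifts toward lower-royalty AI-generated content, the
# platform's payout obligations fall and its margin rises.

def platform_margin(revenue: float, total_streams: int, ai_share: float,
                    human_rate: float = 0.004, ai_rate: float = 0.001) -> float:
    """Return profit margin given the share of streams that are AI-generated."""
    payouts = total_streams * ((1 - ai_share) * human_rate + ai_share * ai_rate)
    return (revenue - payouts) / revenue


# Hypothetical platform: $1M revenue, 200M streams.
print(platform_margin(1_000_000, 200_000_000, ai_share=0.0))   # ≈ 0.20
print(platform_margin(1_000_000, 200_000_000, ai_share=0.28))  # ≈ 0.37
```

With revenue held constant, shifting 28% of streams (the share of uploads Deezer reports as AI-generated) to the cheaper rate raises the margin in this toy model from roughly 20% to roughly 37%: the platform profits from exactly the substitution that crowds out human artists.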
[7] Traditionally, record labels were the primary source of capital for producing and releasing music. In recent years, however, this model has broadened to include a wider range of financial actors. Today, artists can access resources not only from labels but also from other forms of institutional capital including banks, investment funds, and venture capital firms. For example, beatBread is a platform that connects artists directly with these investors, offering an alternative to traditional label deals.
[8] What distinguishes the current landscape from previous eras is the scale at which digital platforms and supply chains operate, which dramatically increases their ability to monopolize and act as choke points in digital markets, thereby increasing the surface area for rent extraction and deepening the structural exploitation of artists’ labour. These platforms don’t fundamentally alter the underlying value chain. Instead, they intensify existing power dynamics by enabling value to be captured at an unprecedented scale. For more on the distinct features of platform capitalism, see: Srnicek (2016).
[9] Nancy Fraser’s Cannibal Capitalism (2022) offers a strong framework for understanding the important theoretical distinction between exploitation (the extraction of surplus value from waged labour) and expropriation (the seizure of resources, labour, and life from populations). She traces how expropriation has historically been organized along racial and colonial lines, making it a foundational element of capitalist accumulation. Following Fraser’s arguments, I would note that the expropriation I am describing in this essay is amplified for racialized artists, owing to the long and ongoing history of expropriation of these populations in the music industry and beyond.
[10] In Marx’s framework, “dead labour” refers to past labour embodied in machinery, tools, and other fixed capital. This dead labour cannot itself create value; it only becomes productive when set in motion by living labour: i.e. when workers recombine with it to produce new commodities. Marx writes in Capital Vol. 1 that “Capital is dead labour, that, vampire-like, only lives by sucking living labour, and lives the more, the more labour it sucks.” This vampiric metaphor brings to mind a musical reference: Neil Young’s very excellent ‘Vampire Blues’, from his best album, On The Beach. In fact, you should probably take a break from reading this footnote and go listen to it now!
[11] What’s novel in the case of generative AI is that the dead creative labour embodied in training datasets and model weights can generate new outputs without new, direct human creative inputs. In effect, creative labour is displaced entirely from the production loop, allowing capital to reproduce and expand itself without returning to creative workers as a source of value.
To be clear, I am referring specifically to the displacement of creative labour. The production system I am analyzing still depends on a layer of infrastructural and technical labour performed by workers at AI companies, but this labour does not reintroduce artists themselves into the value-producing circuit. And while I believe this technical and infrastructural labour should itself be properly compensated, including through mechanisms like fair wages and equity, it remains fundamentally distinct from the creative labour that originally generated the cultural material being monetized.
[12] David Harvey writes extensively about accumulation by dispossession, a concept he develops to explain how capital continues to expand through the privatization, enclosure, or expropriation of resources and social goods that were previously held in common.
[13] Related arguments are increasingly surfacing in public discourse, where AI is framed not only as a tool of economic expropriation, but as a form of cognitive colonization. This framing underscores how generative systems don’t just appropriate creative labour but actively reshape the conditions of human thought and subjectivity. Early empirical research suggests that frequent AI use can lead to measurable declines in skills such as literacy, reading comprehension, and critical thinking, while also promoting cognitive offloading and dependency. In this sense, generative AI represents a double expropriation: first of artists’ labour, and second of our own cognitive capacities.
[14] To be clear, my concerns about large-scale generative AI extend beyond questions of ownership. These systems also have enormous environmental impacts, consuming vast amounts of energy and resources at a time when the planet is already facing accelerating climate crises. The bottom right corner of Harvey’s diagram (Fig. 1) in this essay refers directly to these resource inputs in Marxist terms, as “Free Gifts of Nature”, which we know are not limitless, despite how markets may or may not price them. For more on capitalism’s failure to value nature, see: Battistoni (2025). Nonetheless, the momentum and capital flowing into the development of these models continues to grow, making it clear that they are not simply going to disappear anytime soon. This essay responds to the world we are in, not the one I might prefer, where such systems might be reconsidered entirely.
[15] For those who might argue that “giving artists equity in tech platforms is too difficult, especially for disorganized musicians,” projects like Subvert demonstrate otherwise. As a cooperative marketplace platform for music artists, it has grown rapidly, reaching 9,000+ artists, 1,600+ labels, and 1,200+ supporters in just over a year. “Hard” is not the same as “impossible,” and it certainly isn’t a justification for expropriating artists’ labour.
[16] Thank you to Guillaume Decouflet for his many insightful comments and suggestions (including the very evocative analogy to Us), which strengthened the arguments throughout this essay. And as always, thanks to Maarten for having me. :)

