The media spectacle of generative AI (in which AI companies’ breathless claims of their software’s sorcerous powers are endlessly repeated) has understandably alarmed many creative workers, a group that’s already traumatized by extractive abuse by media and tech companies.
Even though the claims about “AI” are overblown and overhyped, creators are right to be alarmed. Their bosses would like nothing more than to fire them and replace them with pliable software. The “creative” industries talk a lot about how audiences should be paying for creative works, but the companies that bring creators’ works to market treat their own payments to creators as a cost to be minimized.
Creative labor markets are primarily regulated through copyright: the exclusive rights that accrue to creators at the moment that their works are "fixed." Media and tech companies then bargain to buy or license those rights. The theory goes that the more expansive those rights are, the more they'll be worth to corporations, and the more they'll pay creators for them.
That's the theory. In practice, we've spent 40 years expanding copyright. We've made it last longer, expanded it to cover more works, hiked the statutory damages for infringements, and made it easier to prove violations. This has made the entertainment industry larger and more profitable – but the share of those profits going to creators has declined, both in real terms and proportionately.
In other words, today creators have more copyright, the companies that buy creators’ copyrights have more profits, but creators are poorer than they were 40 years ago. How can this be so?
As Rebecca Giblin and I explain in our book Chokepoint Capitalism, the sums creators get from media and tech companies aren’t determined by how durable or far-reaching copyright is – rather, they’re determined by the structure of the creative market.
The market is concentrated into monopolies. We have five big publishers, four big studios, three big labels, two big ad-tech companies, and one gargantuan ebook/audiobook company. The internet has been degraded into “five giant websites, each filled with screenshots from the other four”:
Under these conditions, giving a creator more copyright is like giving a bullied schoolkid extra lunch money. It doesn’t matter how much lunch money you give that kid – the bullies will take it all, and the kid will still go hungry (that’s still true even if the bullies spend some of that stolen lunch money on a PR campaign urging us all to think of the hungry children and give them even more lunch money):
But creative workers have been conditioned – by big media and tech companies – to reflexively turn to copyright as the cure-all for every pathology, and, predictably, there are loud, insistent calls (and a growing list of high-profile lawsuits) arguing that training a machine-learning system is a copyright infringement.
This is a bad theory. First, it’s bad as a matter of copyright law. Fundamentally, machine learning systems ingest a lot of works, analyze them, find statistical correlations between them, and then use those to make new works. It’s a math-heavy version of what every creator does: analyze how the works they admire are made, so they can make their own new works.
If you go through the pages of an art-book analyzing the color schemes or ratios of noses to foreheads in paintings you like, you are not infringing copyright. We should not create a new right to decide who is allowed to think hard about your creative works and learn from them – such a right would make it impossible for the next generation of creators to (lawfully) learn their craft:
(Sometimes, ML systems will plagiarize their own training data; that could be copyright infringement; but a) ML systems will doubtless get guardrails that block this plagiarism; and, b) even after that happens, creators will still worry about being displaced by ML systems trained on their works.)
We should learn from our recent history here. When sampling became a part of commercial hiphop music, some creators clamored for the right to control who could sample their work and to get paid when that happened. The musicians who sampled argued that inserting a few bars from a recording was akin to a jazz trumpeter working a few bars of a popular song into a solo. They lost that argument, and today, anyone who wants to release a song commercially will be required – by radio stations, labels, and distributors – to clear that sample.
This change didn't make musicians better off. The Big Three labels – Sony, Warner, and Universal, who control 70% of the world's recorded music – now require musicians to sign away the rights to samples from their works. The labels also refuse to sell sampling licenses to musicians unless they are signed to one of the Big Three.
Thus, producing music with a sample requires that you take whatever terms the Big Three impose on you, including giving up the right to control sampling of your music. We gave the schoolkids more lunch money and the bullies took that, too.
The monopolists who control the creative industries are already getting ahead of the curve on this one. Companies that hire voice actors are requiring those actors to sign away the (as yet nonexistent) right to train a machine-learning model with their voices:
The National Association of Voice Actors is (quite rightly) advising its members not to sign contracts that make this outrageous demand, and they note that union actors are having success getting these clauses struck, even retroactively:
That's not surprising – labor unions have a much better track record of getting artists paid than giving creators copyright and expecting them to bargain individually for the best deal they can get. But for non-union creators – the majority of us – getting this language struck is going to be a lot harder. Indeed, we already sign contracts full of absurd, unconscionable nonsense that our publishers, labels and studios refuse to negotiate:
Some of the loudest calls for exclusive rights over ML training are coming not from workers, but from media and tech companies. We creative workers can’t afford to let corporations create this right – and not just because they will use it against us. These corporations also have a track record of creating new exclusive rights that bite them in the ass.
For decades, media companies stretched copyright to cover works that were similar to existing works, trying to merge the idea of “inspired by” and “copied from,” assuming that they would be the ones preventing others from making “similar” new works.
But they failed to anticipate the (utterly predictable) rise of copyright trolls, who launched a string of lawsuits arguing that popular songs copied tiny phrases (or just the "feel") of their clients' songs. Pharrell Williams and Robin Thicke got sued into radioactive rubble by Marvin Gaye's estate over their song "Blurred Lines" – which didn't copy any of Gaye's words or melodies, but rather, took its "feel":
Today, every successful musician lives in dread of a multi-million-dollar lawsuit over incidental similarities to obscure tracks. Last spring, Ed Sheeran beat such a suit, but it was a hollow victory. As Sheeran said, with 60,000 new tracks being uploaded to Spotify every day, these similarities are inevitable:
The major labels are worried about this problem, too – but they are at a loss as to what to do about it. They are completely wedded to the idea that every part of music should be converted to property, so that they can expropriate it from creators and add it to their own bulging portfolios. Like a monkey trapped because it has reached through a hole into a hollow log to grab a banana that won’t fit back through the hole, the labels can’t bring themselves to let go.
That’s the curse of the monkey’s paw: the entertainment giants argued for everything to be converted to a tradeable exclusive right – and now the industry is being threatened by trolls and ML creeps who are bent on acquiring their own vast troves of pseudo-property.
There’s a better way. As NAVA president Tim Friedlander told Motherboard’s Joseph Cox, “NAVA is not anti-synthetic voices or anti-AI, we are pro voice actor. We want to ensure that voice actors are actively and equally involved in the evolution of our industry and don’t lose their agency or ability to be compensated fairly for their work and talent.”
This is as good a distillation of the true Luddite ethic as you could ask for. After all, the Luddites didn’t oppose textile automation: rather, they wanted a stake in its rollout and a fair share of its dividends:
Turning every part of the creative process into "IP" hasn't made creators better off. All it's accomplished is to make it harder to create without taking terms from a giant corporation, whose terms inevitably include forcing you to trade all your IP away to them. That's something that Spider Robinson prophesied in his Hugo-winning 1982 story, "Melancholy Elephants":