Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's OK to be biased towards humans, published by dr s on November 12, 2023 on LessWrong.
Let's talk about art.
In the wake of AI art generators being released, it's become pretty clear this will have a seismic effect across the art industry - from illustrators, to comic artists, to animators, many categories see their livelihood threatened, with no obvious "higher level" opened up by this wave of automation for them to move to. On top of this, the AI generators seem to have mostly been trained on material whose copyright status is... dubious, at the very least. Images have been scraped from the internet, frames have been taken from movies, and in general lots of stuff that would usually count as "pirated" if you or I simply downloaded it for private use has been thrown by the terabyte into diffusion models that can now churn out endless variations on the styles and subjects they were fitted to.
Beyond being a legal quandary, these issues border on the philosophical. Broadly speaking, one tends to see two interpretations:
the AI enthusiasts and companies tend to portray this process as "learning". AIs aren't really plagiarizing, they're merely using all that data to infer patterns, such as "what is an apple" or "what does Michelangelo's style look like". They can then apply those patterns to produce new works, but these are merely transformative remixes of the originals, akin to what any human artist does when drawing from their own creative inspirations and experiences.
the artists, on the other hand, respond that the AI is not learning in any way resembling what humans do, but is merely regurgitating minor variations on its training set materials, and as such it is not "creative" in any meaningful sense of the word - merely a way for corporations to whitewash mass plagiarism and resell illegally acquired materials.
Now, both these arguments have their good points and their glaring flaws. If I were hard-pressed to say what it is that I think AI models are really doing, I would probably end up answering "neither of these two, but a secret third thing". They probably don't learn the way humans do. But they probably do learn in some meaningful sense of the word: they seem too good at generalizing for the idea of them being mere plagiarizers to be a defensible position.
I am similarly conflicted in matters of copyright. I am not a fan of our current copyright laws, which I think are far too strict, to the point of stifling rather than incentivizing creativity. But it is also a very questionable double standard that, after years of having to deal with DRM and restrictions imposed in an often losing war against piracy, I now simply have to accept that a big enough company can build a billion-dollar business from terabytes of illegally scraped material.
None of these things, however, cuts at the heart of the problem, I believe. Even if modern AIs are not sophisticated enough to "truly" learn from art, future ones could be. Even if modern AIs have been trained on material that was not lawfully acquired, future ones need not be. And I doubt that artists would then feel OK with those AIs replacing them once all the philosophical and legal technicalities were satisfied; their true beef cuts far deeper than that.
Observe how the two arguments above go, stripped to their essence:
AIs have some property that is "human-like", therefore, they must be treated exactly as humans;
AIs should not be treated as humans because they lack any "human-like" property.
The thing to note is that argument 1 (A, hence B) sets the tone; argument 2 then strives to refute its premise so that it can deny the conclusion (not A, hence not B), but in doing so it accepts and in fact reinforces the unspoken assumption that having human-like properties means you get to be treated as a human.
I suggest an alter...