It’s OK to be biased towards humans

Let’s talk about art.

In the wake of AI art generators being released, it’s become pretty clear this will have a seismic effect on the art industry across the board—from illustrators, to comic artists, to animators, many categories see their livelihoods threatened, with no obvious “higher level” opened up by this wave of automation for them to move to. On top of this, the AI generators seem to have mostly been trained on material whose copyright status is… dubious, at the very least. Images have been scraped from the internet, frames have been taken from movies, and in general lots of stuff that would usually count as “pirated” if you or I downloaded it for our private use has been thrown by the terabyte into diffusion models that can now churn out endless variations on the styles and subjects they fitted over them.

On top of being a legal quandary, these issues border on the philosophical. Broadly speaking, one tends to see two interpretations:

  1. the AI enthusiasts and companies tend to portray this process as “learning”. AIs aren’t really plagiarizing, they’re merely using all that data to infer patterns, such as “what is an apple” or “what does Michelangelo’s style look like”. They can then apply those patterns to produce new works, but these are merely transformative remixes of the originals, akin to what any human artist does when drawing from their own creative inspirations and experiences. After all, “good artists copy, great artists steal”, as Picasso said;

  2. the artists, on the other hand, respond that the AI is not learning in any way resembling what humans do, but is merely regurgitating minor variations on its training set materials, and as such it is not “creative” in any meaningful sense of the word—merely a way for corporations to whitewash mass plagiarism and resell illegally acquired materials.

Now, both these arguments have their good points and their glaring flaws. If I were hard-pressed to say what it is that I think AI models are really doing, I would probably end up answering “neither of these two, but a secret third thing”. They probably don’t learn the way humans do. They probably do learn in some meaningful sense of the word: they seem too good at generalizing for the idea of them being mere plagiarizers to be a defensible position. I am similarly conflicted in matters of copyright. I am not a fan of our current copyright laws, which I think are far too strict, to the point of stifling rather than incentivizing creativity; but it is also a very questionable double standard that, after years of having to deal with DRM and restrictions imposed in an often losing war against piracy, I now simply have to accept that a big enough company can build a billion-dollar business out of terabytes of illegally scraped material.

None of these things, however, cuts to the heart of the problem, I believe. Even if modern AIs were not sophisticated enough to “truly” learn from art, future ones could be. Even if modern AIs have been trained on material that was not lawfully acquired, future ones could be. And I doubt that artists would then feel OK with those AIs replacing them, now that all the philosophical and legal technicalities are satisfied; their true beef cuts far deeper than that.

Observe how the two arguments above go, stripped to their essence:

  1. AIs have some property that is “human-like”; therefore, they must be treated exactly as humans;

  2. AIs should not be treated as humans because they lack any “human-like” property.

The thing to note is that argument 1 (A, hence B) sets the tone; argument 2 then strives to refute its premise so that it can deny the conclusion (Not A, hence Not B), but it accepts, and in fact reinforces, the unspoken assumption that having human-like properties means you get to be treated as a human.

I suggest an alternative argument:

AIs may well have some properties that are “human-like”, but as they are still clearly NOT human, they do not get to be treated as humans.

This argument cuts through all the fluff to strike at the heart of the issue: is our philosophy humanist, or is it not? If human welfare, happiness, and thriving are not the terminal values towards which everything else in society is oriented, what is? One does not need any justification to put humans above other entities. At some point, the buck stops; if our values focus on improving human life, nothing else needs to be said.

I feel like this argument may appear distasteful because it too closely resembles some viewpoints we’ve learned to be extremely wary of. It does, after all, single out a group (humans) and put it at the top of our hierarchy without providing any particular rhyme or reason other than “I belong to it, and so do my friends and family”. The lesson learned from things like racism or sexism is to always be willing to expand our circle of concern, to look past accidents of birth and circumstance, and to seek some shared properties (usually cognitive ones: intelligence, self-awareness, the ability to suffer, morality) that unite us instead, looking past superficial differences. So, I think that for most people an argument that goes “I support X because I simply do, and I don’t have to explain myself any further” triggers some kind of bad gut reaction. It feels wrong, close-minded, bigoted. Always we seek a lower layer, a more fundamental, simple, elegant principle to invoke in our defense of X, a sort of Grand Unified Theory of Moral Worth.

This tendency to search for simpler and simpler principles risks, ironically, being turned against us in the age of AI. One should make their theory of moral worth as simple as possible, but not any simpler. Racism and sexism are bad because they diminish the dignity of other humans; I reserve the right to not give a rat’s ass[1] about the rights of an AI just because its cognitive processes bear some passing resemblance to my own[2].

Let’s talk about life.

When it comes to the possibility of the advent of some kind of AI super-intelligence, all sorts of takes exist on the topic. Some people think it can’t happen, some people think it won’t be as big of a deal as it sounds, some people think it’ll kill us all and that’s bad, and some people think it’ll kill us all and that’s perfectly fine. Many of the typical arguments can be heard in this Richard Sutton video: if AI is even better at being smart and knowledgeable than us, then why shouldn’t we simply bow out and let it take over, the way a parent knows when to make room for their children? It is fear or bigotry to be prejudiced against it; after all, it might be human-like, and in fact better than humans at these very human things, these uniquely human things, the sort of things that, if you’re a lover of progress, you may even consider the very apex of human achievement. It’s selfish not to acknowledge that AI would simply be our superior and deserve our spot.

To which we should be able to puff up our chests and proudly answer:

If that is selfish, then let us be selfish. What’s wrong with being selfish?

It is just the same rhetorical trap as before. Boil down the essence of humanity to some abstract trait like cognition, then show something better at cognition than us and call it our successor. But we do not really prize cognition for its own sake either. We prize things like science and knowledge because they make our lives better, or sometimes because they are just plain fun. A book full of proofs of the most wondrous theorems, floating in the vacuum of an empty universe, would be only a dumb, worthless lump of carbon. It takes someone to read the book for it to be precious.

It takes a human.

Now let me be clear—when I say “human”, I actually mean a bit more than that. I mean that humans have certain people-y qualities that I enjoy and that I feel make them worth caring for, though they are hard to pin down. I think these people-y qualities are not necessarily exclusive to us; in some measure, many non-human animals do possess them, and I cherish them in those too. And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot. I can expand my circle of concern beyond humans just fine; I just don’t think the basis for doing so is simply some other thing’s ability to mimic, or even improve upon, some of our cognitive faculties. I am not sure what precisely would be a good description of these people-y qualities. But I think an art generator AI that can spit out any work in any style from a simple description, as a mere prediction operation run off a database, probably doesn’t possess them; and I think any super-intelligence that would be willing to do things like strip-mine the Earth to its core to build more compute for itself, in a relentless drive to optimize, definitely doesn’t possess them.

If future humans are ever so satisfied with an AI they created that they become willing to entrust it with their future, then that will be that. I don’t know if that moment will ever come, but it would be their choice to make. What we should not do, though, is buy into a belief system in which the worth of humans is made dependent on some bare-bones quality that humans happen to possess, and that can then be improved upon, leading to some kind of gotcha where we’re either guilt-tripped into admitting that AI is superior to us and deserves to replace us, or, vice versa, forced to deny its cognitive abilities even in the face of overwhelming evidence. Reject the assumption. Preferring humans just because they’re humans, just because we are, is certainly a form of bias.

And for once, it’s a fine one.

  1. ^

    That is, a rationalist’s ass.

  2. ^

    As an aside, it’d also be interesting to see what would happen if one took things to the opposite extreme instead. If companies argue that generative AIs can use copyrighted materials because they’re merely “learning” from them like humans, fine: treat them like humans, then. Forbid owning them, or making them work for you without payment, and see where that goes—or whether it makes sense at all. If AIs are like people, then the people they’re most like are slaves; and paid workers have good reason to protest the unfair competition of corporation-owned slaves.