I write software for a living and sometimes write on substack: https://taylorgordonlunt.substack.com/
Taylor G. Lunt
I was including “trying to save the world” in “whatever else you think you should be doing.”
I’m not sure I really endorse any view of identity, or even think it’s a coherent concept, but at the very least I think making a copy of a thing doesn’t produce something that is that thing.
It seems like you’re projecting that AI can capture >50% of GDP in the next 2-7 years (and I think your AI-researcher timeline actually implies white-collar work is replaced in 1-4 years), so you should invest heavily in AI. You’ll get far greater returns on your money from that than from anything else, and can use the money to fund whatever else you think you should be doing.
I gave a speech last night, an introduction to AI apocalypse risk, and two people asked me if there are any good counter-arguments to my position. What would you have answered in my position?
Yes, for me the problem is moving a mind from a biological substrate to a digital one. It’s hard for me to imagine you’re actually moving the original, not just making a copy. Maybe there’s some way to do it, so I’m not totally confident.
The question about mind uploading feels a bit to me like, “would you be against 2 + 2 being 5, if it were possible?” I think it couldn’t be possible even in theory.
I think brain emulation could be possible though, and you could have essentially human minds running on a machine. I wouldn’t necessarily be against that, or even artificial intelligences that we are confident possess whatever is valuable about human minds (consciousness plus some other stuff probably). But as a biological human, I also have a vested interest in making sure if this replacement happens, it happens in a way that doesn’t screw over existing biological humans. In particular, if we give a bunch of rights to machines, we dilute our rights in a way that could be very bad for us.
The right not to be made to suffer seems reasonable; the rest seem risky to me. If you start giving out freedoms, you take away mine. Every other person’s freedoms are an imposition on me: I cannot build a house there because you already have one there, and so on. We tolerate each other’s freedoms because the freedom of others is a guarantee of our own, and because we know those other people are living, sentient, valuable minds who deserve those freedoms. But if you give those freedoms to minds that are not valuable in the same way, you just dilute the rights of valuable minds.
As for the question of whether or not we’ll give AIs voting rights, I’d say once they can pass as human well enough to convincingly make sad videos complaining they don’t have voting rights, they’ll get voting rights. Most people do not have the level of intelligence required to think “this person seems very unhappy, but this is just a video being generated by an artificial intelligence that is likely not actually experiencing unhappiness, so we shouldn’t give them what they want.”
AI taking over is a larger risk than giving AI personhood, I agree with that. This personhood question only makes sense in the universe where we don’t go extinct.
I wrote something very similar in one of my Substack posts, though I think it never made it to LessWrong:
People are people! Machines are machines! Machines must never have rights. If you can imagine a machine that would deserve rights, then we must never build that machine.
The wording was so similar I wondered for a second if I might be the author of the Pro-Human Declaration.
I think the danger from giving rights to machines that don’t deserve them is very high, since the machine minds can make zillions of copies of themselves. If zillions of machine minds have rights, your human rights become diluted to nothing. Human extinction then becomes extinction of one zillionth of the “valuable” minds in existence, which is a rounding error and a non-issue. We lose the second valueless machines get human rights.
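To put rough illustrative numbers on that dilution (the figures are made up, purely for scale): with $H$ humans and $N$ machine minds holding equal rights, humanity’s share of all rights-holders is

$$\frac{H}{H + N} \longrightarrow 0 \quad \text{as } N \to \infty.$$

At $H \approx 8 \times 10^9$ and even a modest $N = 10^{15}$ copies, the human share is already below a thousandth of a percent.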
The risk of this happening (machines getting rights they don’t deserve) feels very high to me. Regular people are basically primed by sci-fi movies to give AI rights even when it doesn’t deserve them. We should be very cautious about letting this happen.
However, I did not mean to imply that if machines that do deserve rights come into being, they should be denied those rights. I only meant that machines must never have rights, and therefore we must never create a machine that would deserve them. If one came into being anyway, I would potentially consider giving it rights, perhaps conditional on some kind of non-proliferation clause where the AI is not allowed to copy itself, or must keep self-copying within a reasonable limit. Self-copiers should be destroyed even if they deserve rights, for the same reason you’d kill in self-defense.
I don’t understand what you mean.
“Well do you care about the rest of humanity enough to send yourself to hell?” Nope. Also, even if I did endorse that decision, it probably still wouldn’t be in my own interest. IMO that decision would be a simple mistake with respect to my self-interest. My empathy is not powerful enough that avoiding some guilt would be worth a million years of torture.
“Or adopting policies where you only get sent to hell in X universes rather than Y?” In the hypothetical, there is only one universe and two buttons. Any other universes are figments of my imagination. You’re suggesting I imagine a veil of ignorance and make moral decisions from behind it. But assuming a veil of ignorance assumes utilitarianism = egoism, which is what you’re trying to prove. In reality I have one life and I know where I stand in it. I don’t need to make decisions from behind a veil of ignorance. I can steal knowing it makes me richer, without having to wonder whether I’ll end up the thief or the victim. I know I’m the thief, because I’m the one choosing to steal.
“Prediction markets will be net bad for society.”
“The intelligence of the smartest AI systems is still somewhere between that of a worm and a squirrel.”
Assuming you could develop a more robust measure of intelligence than IQ and administer the test appropriately to an AI. I’m talking about general intelligence, granting all the assumptions you have to make to posit a single factor of intelligence.
“Rationalism is a euphemism for autism (or the “broader autism phenotype”), and LessWrong is an autism club for adults. And the rationalist ideology is essentially a reification of typical autistic preferences.”
“There exists information which would drive you (yes you) to madness if you comprehended it.”
Most of the content on this website is more interesting and engaging than a bunch of downvote-explanation comments would be.
This means the actions that maximize wellbeing for all are always equivalent to the actions that advance my own self-interest? How is this not just straightforwardly false? Any time I act against humanity, I am also acting against my own self-interest? Unless you use some funny definition of self-interest, this cannot be true.
E.g. two buttons: red button sends you to hell for a million years, green button sends everyone else in the universe to hell for a million years. Self-interest, if the term means anything at all, requires you to hit the green button, but utilitarianism obviously demands the opposite.
He’s a neuroscientist and a materialist, and I don’t think he’s an epiphenomenalist.
In the excerpts in the OP, he gives an epiphenomenalist vibe because he’s responding to people who think that free will allows a person to violate the laws of physics (or people who think a lack of free will implies a complete inability to make choices). He says, “You are part of the universe and there is no place for you to stand outside of its causal structure.” He tries to show that consciousness is entirely downstream of physical causes. This does not imply, however, that consciousness is not also upstream of physical effects. Here’s another excerpt where he mentions that consciousness is part of a larger causal framework:
There is no free will, but choices matter, and this isn’t a paradox. Your desires, intentions, and decisions arise out of the present state of the universe, which includes your brain (and your soul, if such a thing exists), along with all of their influences. Your mental states are part of a causal framework.
(https://podcasts.happyscribe.com/making-sense-with-sam-harris/241-final-thoughts-on-free-will)
This doesn’t sound different from OP’s view, at a physical level.
The same argument that shows a base universe may be computationally richer than our universe (and at least cannot be less computationally rich) also greatly limits the number of simulated universes there could be. The (third branch of the) simulation hypothesis, which posits a very large number of simulated universes ultimately stemming from a single base universe, basically relies on the assumption that you do not need at least X bits of information in the base universe to simulate a universe of X bits. If you add that restriction, which I’d say you should, the whole idea falls apart and the claim that you’re living in a simulation is no longer a certainty. At that point, you’re limited to just the regular number of bits in the base universe for running minds or whatever.
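To sketch the counting argument (my notation, nothing formal): let $B$ be the bits available in the base universe and $X_i$ the bits required to simulate universe $i$. If every simulation must be encoded within the base universe, then

$$\sum_i X_i \le B,$$

so at most one simulation can be as rich as the base universe, and even the number of smaller ones is capped at $B / \min_i X_i$. Drop that restriction and nothing bounds the sum, which is exactly what lets the simulation argument posit astronomically many simulated observers.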
Sam: “Free will is making decisions independent of your neurochemistry, or other physical causes. A decision is made when an answer arrives in consciousness.”
You: “Free will is making decisions when you could have chosen otherwise, if your reasons or circumstances were different. A decision is made after a deliberative process, when you finally utter your choice, or have some feeling of having finally decided.”
I’m not necessarily disagreeing with your definition, but I would guess there is no actual disagreement about the underlying physical world here.
It almost seems like a real, physical disagreement could be whether or not the conscious mind is involved in decision-making, but there’s obviously no way Sam would say something like “consciousness is causally disconnected from the rest of the universe, and can never influence future decision-making processes in the brain.” He speaks the way he does because he’s using a different definition of “choice”/”decision” than you are. It’s, again, semantic. It’s not like he would disagree that the process you laid out in the section “Free Will As a Deliberative Algorithm” exists. You both agree about how the brain works; you just don’t agree on what to call things. I don’t agree that any of your “Final Cruxes” are non-semantic in nature. They either come from a difference in the definition of the term “free will”, or of the terms “choice”/”decision”.
The semantic nature of this debate is revealed when you say things like:
I don’t understand why he seems to place so much importance in ultimate authorship
This means, “Sam is using the term ‘free will’ to mean X, but I’d prefer if he meant Y.” The reason he places so much emphasis on it, by the way, is because some people feel they are able to escape determinism through the raw power of their consciousness, and it’s those people Sam is arguing against by showing that everything in consciousness is the result of some proximate, non-conscious input.
As for which version of the term “free will” we should use, I personally don’t care. I only really hear the term get used in free-will debates anyway.
Everyone seems to be commenting on how this essay appeals to a hyper-specific subset of the population, and wouldn’t convince most people (most “normal” people). I think I agree with the general sentiment that everyone who can be convinced by this kind of essay has already been convinced.
On the other hand, I had decent success recently giving a ~7 minute speech at my local Toastmasters about AI apocalypse risk. I mostly avoided the theoretical arguments and focused on empirical evidence that AI systems are already defying our wishes (AI psychosis, other bad AI behaviour), that AI companies don’t know how to stop this from happening, and that AI is getting smarter and once smart enough will be able to outsmart us and defy us in larger, more dangerous ways. It seemed to be well received. They were skeptical at first but not by the end. Maybe they don’t believe it in their bones that AI will really kill everyone. I’m not sure that non-”rationalist” people are really capable of feeling their beliefs in their bones in general. Instead of deciding what to believe with reason, they decide something sounds plausible, and then look around to see if other people agree. But it seems like they’re at least open to the idea that I might be right, which is about as much as you can ask for.
Dropping into hyper-specific arguments about the no-free-lunch theorem, Ricardo’s law, decision theory, etc. is not going to be productive with normal people, because that’s not how they think. I did lean on the IABIED chess analogy for superintelligence, but not to prove anything logically; rather, I wanted to invoke the helplessness they imagine they’d feel playing chess against a grandmaster, so they could believe intuitively that they might be just as helpless against a superintelligence. That’s really all it took.
Most people, rather than having hyper-specific arguments you have to address, haven’t really thought about this much at all. You’re lucky if they’ve used ChatGPT or Claude. I think we need more “AI apocalypse risk 101” content. I personally admire the style of 112 Gripes about the French as a template for this kind of thing.