I never linked complexity to absolute certainty of something being sentient or not, only to pretty good likelihood. The complexity of any known calculation+experience machine (most animals, from insects up) is undeniably far greater than that of any current Turing machine. Therefore it’s reasonable to assume that consciousness demands a lot of complexity, certainly much more than that of a current language model. To generate experience is fundamentally different from generating only calculations. Yes, this is an opinion, not a fact. But so is your claim!
I know for a fact that at least one human (myself) is conscious, because I can experience it. That’s still the strongest reason to assume it, and it can’t be called into question as you did.
“There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it’s not (much like pain receptors aren’t necessary to feel pain, etc.).”
That’s just nonsense. A machine that makes only calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences. All sentient machines that we know of have the same basic architecture. All non-sentient calculation machines also have the same basic architecture. The likelihood that sentience will arise in the latter architecture as we scale it up is, therefore, not impossible, but quite unlikely. The likelihood that it will arise in a current language model, which doesn’t need to sleep, could function for a trillion years without getting tired, and whose workings we understand pretty well (fundamentally different from an animal brain, fundamentally similar to a pocket calculator), is even more unlikely.
“On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that’s best at continuing the prompt.”
It takes far more complexity to simulate a person than LaMDA’s architecture offers, if it’s possible at all on a Turing machine. A human brain is orders of magnitude more complex than LaMDA.
“The analogy would be to say about human brain that all it does is to transform input electrical impulses to output electricity according to neuron-specific rules.”
With orders of magnitude more complexity than LaMDA. So much so that after decades of neuroscience we still don’t have a clue how consciousness is generated, while we have pretty good clues about how LaMDA works.
“a meat brain, which, if we look inside, contains no sentience”
Can you really be so sure? Just because we can’t see it yet doesn’t mean it doesn’t exist. Also, to deny consciousness is the biggest philosophical fallacy possible, because the only thing one can be sure exists is one’s own consciousness.
“Of course, the brain claims to be sentient, but that’s only because of how its neurons are connected.”
Like I said, to deny consciousness is the biggest possible philosophical fallacy. No proof is needed that a triangle has three sides, and the same goes for consciousness. Unless you’re giving the word other meanings.
There’s just no good reason to assume that LaMDA is sentient. Architecture is everything, and its architecture is just the same as that of other similar models: it predicts the most likely next word (if I recall correctly). Being sentient involves far more complexity than that, even for something as simple as an insect. Its claiming to be sentient might just mean it was mischievously programmed that way, or that it found this to be the most likely succession of words. I’ve seen other language models and chatbots claim they were sentient too, though perhaps ironically.
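To make “predicts the most likely next word” concrete, here’s a minimal sketch in Python. The probability table is a made-up toy, not LaMDA’s actual model, but the greedy decoding loop illustrates the kind of operation being described:

```python
# Toy sketch of greedy next-word prediction. The probabilities below are
# invented for illustration; a real model learns such distributions with a
# huge neural network, but conceptually it still outputs "the most likely
# next word given the context".
next_word_probs = {
    ("i", "am"): {"sentient": 0.4, "a": 0.35, "happy": 0.25},
    ("am", "sentient"): {".": 0.9, "and": 0.1},
}

def continue_prompt(words, steps=2):
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])             # last two words as context
        dist = next_word_probs.get(context)
        if dist is None:                        # unknown context: stop
            break
        words.append(max(dist, key=dist.get))   # greedily pick the top word
    return " ".join(words)

print(continue_prompt(["i", "am"]))  # -> "i am sentient ."
```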
Perhaps as importantly, there’s also no good reason to worry that it is being mistreated, or even that it can be. It has no pain receptors, it can’t be sleep deprived because it doesn’t sleep, can’t be food deprived because it doesn’t need food...
I’m not saying that it is impossible that it is sentient, just that there is no good reason to assume that it is. That, plus the fact that it doesn’t seem to be mistreated and seems almost impossible to mistreat, should make us less worried. Anyway, we should always play it safe and never mistreat any “thing”.
Right then, but my original claim still stands: your main point is, in fact, that it is hard to destroy the world. As I’ve explained (hacking into nuclear codes), this doesn’t make any sense. If we create an AI better than us at code, I don’t have any doubts that it CAN easily do it, if it WANTS. My only doubt is whether it will want to or not. Not whether it will be capable, because like I said, even a very good human hacker in the future could be capable.
At least the type of AGI that I fear is one capable of Recursive Self-Improvement, which will unavoidably attain enormous capabilities. Not some prosaic non-improving AGI that is only human-level. To doubt whether the latter would have the capability to destroy the world is kinda reasonable, to doubt it about the former is not.
The post is clearly saying “it will take longer than days/weeks/months SO THAT we will likely have time to react”. Both claims are highly unlikely. It wouldn’t take a proper AGI weeks or months to hack into the nuclear codes of a big power; it would take days or even hours. That gives us no time to react. But the question here isn’t even about time. It’s about something MORE intelligent than us which WILL overpower us if it wants, be it on the 1st or the 100th try (nothing guarantees we can turn it off after a first failed strike).
Am I extremely sure that an unaligned AGI would cause doom? No. But to be extremely sure of the opposite is just as irrational. For some reason it’s called a risk—it’s something that has a certain probability, and given that we all should agree that that probability is high enough, we all should take the matter extremely seriously regardless of our differences.
Your argument boils down to “destroying the world isn’t easy”. Do you seriously believe this? All it takes is to hack into the codes of one single big nuclear power, thereby triggering mutually assured destruction, thereby triggering nuclear winter and effectively killing us all with radiation over time.
In fact you don’t need AGI to destroy the world. You only need a really good hacker, or a really bad president. In fact we’ve been close about a dozen times, so I hear. If Stanislav Petrov had listened to the computer in 1983 which indicated 100% probability of an incoming nuclear strike, the world would have been destroyed. If all three officers of the Soviet submarine during the Cuban Missile Crisis had agreed to launch what they mistakenly thought would be a nuclear counter-strike, the world would have been destroyed. Etc., etc.
Of course there are also other easy ways to destroy the world, but this one is enough to invalidate your argument.
“You may notice that the whole argument is based on “it might be impossible”. I agree that it can be the case. But I don’t see how it’s more likely than “it might be possible”.”
I never said anything to the contrary. Are we allowed to discuss things when we’re not sure whether they’re possible or not? It seems that you’re against this.
Tomorrow’s people matter, in terms of leaving them a place in minimally decent conditions. That’s why when you die for a cause, you’re also dying so that tomorrow’s people can die less and suffer less. But in fact you’re not dying for unborn people—you’re dying for the living people of the future.
But to die to make room for others is simply to die for unborn people. Because them never being born is no tragedy—they never existed, so they never missed anything. But living people actually dying is a tragedy.
And I’m not against the idea that giving life is a great gift. Or should I say, it could be a great gift, if this world were at least acceptable, which it’s far from being. It’s just that not giving it doesn’t hold any negative value; it’s just neutral instead of positive. Whereas taking a life does hold negative value.
It’s as simple as that.
I can see the altruism in dying for a cause. But it’s a leap of faith to claim, from there, that there’s altruism in dying in itself. Die for what, to make room for others to be born? Unborn beings don’t exist; they are not moral patients. It would be perfectly fine if no one else was born from now on—in fact it would be better than even one single person dying.
Furthermore, if we’re trying to create a technologically mature society capable of discovering immortality, it will perhaps be capable of colonizing other planets much sooner. So there are trillions of empty planets to put all the new people on before we have to start taking out the old ones.
To die to make room for others just doesn’t make any sense.
“consciousness will go on just fine without either of us specifically being here”
It sure will. But that’s like saying that money will go on just fine if you go bankrupt. I mean, sure, the world will still be full of wealth, but that won’t make you any less poor. Now imagine this happening to everyone inevitably. Sounds really sh*tty to me.
“Btw I’m new to this community,”
To each paragraph:
Totally unfair comparison. Do you really think that immortality and utopia are frivolous goals? So maybe you don’t really believe in cryonics or something. Well, I don’t either. But transhumanism is way more than that. I think that its goals for AI and life extension are anything but a joke.
That’s reductive. As an altruist, I care about all other conscious beings. Of course maintaining sanity demands some distancing, but that’s that. So I’d say I’m a collectivist. But one person doesn’t substitute for another. Others continuing to live will never make up for those who die. The act of ceasing to exist is of the utmost cruelty, and there’s nothing that can compensate for it.
I have no idea what consciousness is scientifically, but morally I’m pretty sure it is valuable. All morality comes from seeking the well-being of conscious beings. So if there’s any value system, consciousness must be at its center. Not much explaining is needed here; it’s just that everyone wants to be well—and to be.
Like I said, every conscious being wants to exist. It’s just the way we’ve been programmed. All beings matter, myself included. I goddamn want to live; that is the basis of all wants and of all rights. Have I been brainwashed? Religions have been brainwashing people about the exact opposite for millennia, that death is ok, either because we go to heaven according to the West, or because we’ll reincarnate or we’re part of a whole according to the East. So, quite on the contrary, I think I have been de-brainwashed.
An unborn person isn’t a tragedy. A dead one is. So it’s much more important to care about the living than the unborn.
If most people are saying that AGI is decade(s) away, then we aren’t that far from it.
As for raising children as best as we can I think that’s just common sense.
I partly agree. It would be horrible if Genghis Khan or Hitler never died. But we could always put them in a really good prison. I just don’t wanna die and I think no minimally decent person deserves to, just so we can get rid of a few psychopaths.
Also, we’re talking about immortality not now, but in a technological utopia, since only such a society could produce it. So the dynamics would be different.
As for fresh new perspectives, in this post I propose selective memory deletion with immortality. So that would contribute to that. Even then, getting fresh new perspectives is pretty good, but nowhere near being worth the ceasing of trillions of consciousnesses.
“You are a clone of your dead childhood self.”
Yes, that’s a typical Buddhist-like statement, that we die and are reborn each instant. But I think it’s just incorrect—my childhood self never died. He’s alive right now, here. When I die the biological death, then I will stop existing. It’s as simple as that. Yet I feel like Buddhism, and Eastern religion in general, does this and other mental gymnastics to comfort people.
“So you either stick with modernism (that transhumanism is the one, special ideology immune from humanity’s tragic need to self-sedate), or dive into the void”
There are self-sedating transhumanists, for sure. Like, if you think there isn’t a relevant probability that immortality just won’t work, or if you’re optimistic about the AI control problem, you’re definitely a self-sedating transhumanist. I try to not be one as much as possible, but maybe I am in some areas—no one’s perfect.
But it’s pretty clear that there’s a big difference between transhumanism and religions. The former relies on science to propose solutions to our problems, while the latter is based on the teachings of prophets, people who thought that their personal intuitions were the absolute truth. And, in terms of self-sedating ideas, if transhumanism is a small grain of Valium, religion is a big fat tab.
“It’s hard to say anything about reality when the only thing you know is that you’re high af all the time.”
I agree. I claim uncertainty on all my claims.
“Every day the same sun rises, yet it’s a different day. You aren’t the sun, you’re the day.
Imagine droplets of water trapped in a cup, then poured back into the ocean. Water is consciousness, your mind is the cup.”
Yeah, yeah, yeah, I know, I know, I’ve heard the story a thousand times. There’s only one indivisible self/consciousness/being, and we’re just instances of it. Well, you can believe that if you want; I don’t have the scientific evidence to disprove it. But neither do you have the evidence to prove it, so I can also disbelieve it. My intuition clearly disbelieves it. When I die biologically it will be blackout. It’s cruel af.
“Imagine if their reign extended infinitely. But for the grace of Death might we soon unlock Immortality.”
Either too deep or I’m too dumb, didn’t quite get it. Please explain less poetically.
Still, that could all happen with philosophical zombies. A computer agent (AI) doesn’t sleep and can function forever. These two factors are what lead me to believe that computers, as we currently define them, won’t ever be alive, even if they come to emulate the world perfectly. At best they’ll produce p-zombies.
“I’m feeling enthusiastic to try to make it work out, instead of being afraid that it won’t.”
Well, for someone who’s accusing me of still emotionally defending a wrong mainstream norm (deathism), you’re doing the same thing yourself by espousing empty positivism. Is it honest to feel enthusiastic about something when your probabilities are grim? The probabilities should come first, not how you feel about them.
“It’s true that I lack the gear-level model explainig how it’s possible for me to exist for quadrillion years.”
Well, I do have one for the opposite: the brain is finite, and as time tends to infinity, so do memories, and it might be impossible to trim memories like we do in a computer without destroying the self.
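For contrast, here’s how trivially a computer can “trim memories”: a minimal sketch, assuming (purely for illustration) that memories are separable records, which is exactly what an integrated brain may not permit:

```python
from collections import deque

# A bounded "memory" that silently discards its oldest entries when full.
# Trivial in software; the open question is whether a brain's memories are
# separable enough to be trimmed like this without damaging the self.
memory = deque(maxlen=3)

for year, event in enumerate(["birth", "school", "job", "travel", "family"]):
    memory.append((year, event))

print(list(memory))  # -> [(2, 'job'), (3, 'travel'), (4, 'family')]
```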
“For every argument “what if it’s impossible to do x and x is required to exist for quadrillion years” I can automatically construct counter arguments like “what if it’s actually possible to do x” or “what if x is not required”.”
That’s fine! Are we allowed to have different opinions?
“How do you manage to get 70-80% confidence level here? This sounds overconfident to me.”
Could be. I’ll admit that it’s a prediction based more on intuition than reasoning, so it’s not of the highest value anyway.
It does ring true to me a bit. How could it not, when one cannot imagine a way to exist forever with sanity? Have you ever stopped to imagine, just relying on your intuition, what it would be like to live for a quadrillion years? I’m not talking about a cute few thousand, like most people imagine when we talk about immortality. I’m talking about proper gazillions, so to speak. Doesn’t it scare the sh*t out of you? Just like Valentine says in his comment, it’s curious how very few transhumanists have ever stopped to stare at this abyss.
On the other hand, I don’t think anyone hates death more than me. It truly makes me utterly depressed and hopeless. It’s just that I don’t see any possible alternative to it. That’s why I’m pessimistic about the matter—both my intuition and my reasoning really point to the idea that it’s technologically impossible for any conscious being to exist for a quadrillion years, though not with 100% certainty. Maybe 70-80%.
The ideal situation would be that we live forever but only ever remember a short span of time, so that we would always feel “fresh” (i.e., not go totally insane). I’m just not sure if that’s possible.
“Whatever posttranshuman creature inherits the ghost of your body in a thousand years won’t be “you” in any sense beyond the pettiest interpretation of ego as “continuous memory””
I used to buy into that Buddhist perspective, but I no longer do. I think that’s a sedative, like all religions. Though I will admit that I still meditate, because I still hope to find out that I’m wrong. I hope I do, but I don’t have a lot of hope. My reason and intuition are clear in telling me that the self is extremely valuable, both mine and that of all other conscious beings, and death is a mistake.
Unless you mean to say that they will only be a clone of me. Then you’re right, a clone of me is not me at all, even if it feels exactly like me. But then we would have just failed at life extension anyway. Who’s interested in getting an immortal clone? People are interested in living forever themselves, not someone else. At least if they’re being honest.
“Your offspring are as much “you” as that thousand year ego projection.”
I’ve been alive for 30 years—not much, I admit, but I still feel as much like me as on the first day I can remember. I suspect that as long as the brain remains healthy, that will remain so. But I have never felt “me” in any other conscious being. Again: Buddhist projection. Sedative. Each conscious being is irreplaceable.
“if one might make a conscious being out of Silicon but not out of a Turing machine”
I also doubt that btw.
“what happens when you run the laws of physics on a Turing machine and have simulated humans arise”
Is physics computable? That’s an open question.
And more importantly, there’s no guarantee that the laws of physics would necessarily generate conscious beings.
Even if it did, could be p-zombies.
“What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.”
True. But as long as you have someone, it’s no longer an illusion. It’s like, if you stimulate your pleasure centers with an electrode and say “hmmm, that feels good”, was the pleasure an illusion? No. It may have been physically an illusion, but not experientially, and the latter is what really matters. Experience is what really matters, or is at least enough to make something real. That consciousness exists is undeniable. “I think, therefore I am.” Experience is the basis of all fact.
Can we really separate them? I’m sure that the limitations of consciousness (software) have a physical basis (hardware). I’m sure we could find the physical correlates of “failure to keep up with experience”, just as we could find the physical correlates of why someone who doesn’t sleep for a few days starts failing to keep up with experience.
It all translates down to hardware in the end.
But anyway I’ll say again that I admitted it was speculative and not the best example.
“There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside.”
Modern computers could theoretically do anything that a human does, except experience it. I can’t draw a line around the part of my brain responsible for experience because there probably isn’t one: it’s all of it. I’m no neurologist, but from the little I know, the brain has an integrated architecture.
Maybe in the future we could make conscious silicon machines (or of whatever material), but I still maintain that the brain is not a Turing machine—or at least not only.
“The outside only works in terms of information.”
Could be. The mind processes information, but it is not information (this is an intuitive opinion, and so is yours).
“Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.”
Now we’ve arrived at my favorite part of the computationalist discourse: claiming or suggesting that consciousness is an illusion. I think the one thing that can’t be an illusion is consciousness. Consciousness is all that certainly exists.
As for being made of atoms or not, well, information isn’t, either. But it’s expressed by atoms, and so is consciousness.
Perhaps our main difference is that you seem to believe in computationalism, while I don’t. I think consciousness is something fundamentally different from a computer program or any other kind of information. It’s experience, which is beyond information.
I think it is factually correct that we get Alzheimer’s and dementia in old age because the brain gets worn out. Whether that is because of failing to keep up with all the memory accumulation is more speculative. So I admit that I shouldn’t have made that claim.
But the brain gets worn out from what? Doing its job. And what’s its job...?
Anyway, I think it would be more productive to at least present an explanation in a couple of lines rather than only saying that I’m wrong.