On the subject of patterns, there’s an old joke: Suppose you replace a human’s neurons, one by one, with techno-doodads that have precisely the same input-output patterns. As you replace, you ask the subject, about once a minute, “Do you still have qualia?” Now, what do you do if he starts saying “No”?
Check their stream of consciousness to see if they’re trolling. If they’re not, YOU TURNED INTO A CAT!!
Your claim seems to require more knowledge about biology than most people actually have. Suppose you have an upload saying “I’m conscious”. You start optimizing the program, step by little step, until you get a tiny program that just outputs the string “I’m conscious” without actually being conscious. How do we tell at what point the program lost consciousness? And if we can’t tell, then why are we sure that the process of scanning and uploading a biological brain doesn’t have similar problems?
That’s what makes it a joke.
I think you may need to repeat the fact that this is a joke at the bottom, since you already have two replies that didn’t get it …
The punchline seemed too much like what people actually say for it to be sufficiently absurd to qualify as a joke. This related anecdote explains why it would seem funny to Rolf.
So by definition, he would have said “No” with neurons as well. Slap him for scaring you.
Not by definition, but by consequence of the materialist belief, that the neurons are everything there is to a mind. There may be excellent reasons for that belief, but the experiment, if carried out, would be an empirical test of it, not a joke.
Hence Eliezer’s response.
Weeell, if there was some supernatural influence wouldn’t it need to show itself, somehow, in neuron input/output patterns?
You’d have to ask someone who believes in such a supernatural influence where it intervenes. You’d also have to ask the materialist how they determined that they were replacing neurons with physically equivalent devices. It’s difficult to determine the input-output behaviour of a single component when it’s embedded in a complex machine whose overall operation one knows very little about, and cutting it out to analyse it might destroy its ability to respond to the supernatural forces.
As context to these remarks, I’ve read some of the discussion between Rolf Andreassen and John C Wright on the latter’s blog, and whatever I might think of supernatural stuff, I must agree with Wright that Rolf is persistently smuggling his materialist assumptions into his arguments and then pulling them out as the conclusion.
By what definition is outputting “No” in response to the input “Do you still have qualia?” not an input-output pattern?
That’s the joke, yes.