By “equally” I meant:
“in the same ways (and to the same degree)”.
If you actually believe in florid many worlds, you would end up pretty insouciant, since everything possible happens, and nothing can be avoided.
Same way you know anything. “Sharp valued” and “classical” have meanings, which cash out in expected experience.
This question doesn’t really make sense from a naturalistic perspective, because there isn’t any causal mechanism that could be responsible for the difference between “a version of me that exists at 3pm tomorrow, whose experiences I should anticipate experiencing” and “an exact physical copy of me that exists at 3pm tomorrow, whose experiences I shouldn’t anticipate experiencing”.
There is, and it’s multi-way splitting, whether through copying or many-worlds branching. The present you can’t anticipate having all their experiences, because experience is experienced one-at-a-time. They can all look back at their memories, and conclude that they were you, but you can’t simply reverse that and conclude that you will be them, because the set-up is asymmetrical.
Scenario 1 is crazy talk, and it’s not the scenario I’m talking about. When I say “You should anticipate having both experiences”, I mean it in the sense of Scenario 2.
Scenario 2: “Two separate screens.” My stream of consciousness continues from Rob-x to Rob-y, and it also continues from Rob-x to Rob-z. Or, equivalently: Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels exactly as though he was just Rob-x (since each of these slightly different people has all the memories, personality traits, etc. of Rob-x — just as though they’d stepped through a doorway).
But that isn’t an experience. It’s two experiences. You will not have an experience of having two experiences. Two experiences will experience having been one person.
If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?
Yeah.
Are you going to care about 1000 different copies equally?
ETA.
The particular *brain states* look no different in the teleporter case than if I’d stepped through a door; so if there’s something that makes the post-teleporter Rob “not me” while also making the post-doorway Rob “me”, then it must lie outside the brain states, a Cartesian Ghost.
There’s another option: door-Rob has physical continuity. There’s an analogy with the identity-over-time of physical objects: if someone destroyed the Mona Lisa, and created an atom-by-atom duplicate some time later, the duplicate would not be considered the same entity (numerical identity).
I am talking about the minimal set of operations you have to perform to get experimental results. A many worlder may care about other branches philosophically, but if they don’t renormalise, their results will be wrong, and if they don’t discard, they will do unnecessary calculation.
Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?
The claim that humans are at least TM’s is quite different to the claim that humans are at most TM’s. Only the second is computationalism.
Meanwhile the many-worlds interpretation suffers from the problem that it is hard to bridge to experience.
Operationally, it’s straightforward: you keep “erasing the part of the (alleged) wavefunction that is inconsistent with my indexical observations, and then re-normalizing the wavefunction”… all the while murmuring under your breath, “this is not collapse… this is not collapse”.
(Lubos Motl is quoted making a similar comment here https://www.lesswrong.com/posts/2D9s6kpegDQtrueBE/multiple-worlds-one-universal-wave-function?commentId=8CXRntS3JkLbBaasx)
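For concreteness, the “erase and re-normalize” step can be sketched numerically. This is a toy two-outcome example of my own, not code from anyone in the thread; the point is only that the operational step is the same whatever you call it.

```python
import numpy as np

# Toy two-outcome system: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
psi = np.array([0.6, 0.8])

# Suppose outcome 0 is what "I" observe. The operational step is the
# same whether you call it collapse or branch selection: zero out the
# component inconsistent with the observation, then re-normalize.
observed = 0
projected = np.zeros_like(psi)
projected[observed] = psi[observed]
psi_post = projected / np.linalg.norm(projected)

print(psi_post)  # [1. 0.]
```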
That claim is unjustified and unjustifiable.
Nothing complex is a black box, because it has components, which can potentially be understood.
Nothing artificial is a black box to the person who built it.
An LLM is, of course, complex and artificial.
Everything is fundamentally a black box until proven otherwise.
What justifies that claim?
Our ability to imagine systems behaving in ways that are 100% predictable, and our ability to test systems so as to ensure that they behave predictably.
I wasn’t arguing on that basis.
“every particle interaction creates n parallel universes which never physically interfere with each other”
Although a fairly standard way of explaining MWI, this is an example of conflating coherence and decoherence. To get branches that never interact with each other again, you need decoherence; but decoherence is a complex dynamical process that takes some time, so it is not going to occur once per elementary interaction. It’s reasonable to suppose that elementary interactions produce coherent superpositions, on the other hand, but these are not mutually isolated “worlds”. And we have fairly strong evidence for them: quantum computing relies on complex coherent superpositions, so any idea that all superpositions just automatically and instantly decohere must be rejected.
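The coherence/decoherence distinction can be made concrete with a toy phase-damping model (my illustration, not part of the original exchange): the populations of a superposition survive, while the coherences decay continuously over time rather than vanishing at the instant of an interaction.

```python
import numpy as np

# |+> = (|0> + |1>)/sqrt(2): a coherent superposition.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus).astype(complex)  # off-diagonals = 0.5

def dephase(rho, gamma, t):
    """Phase damping: coherences decay as exp(-gamma*t); populations don't."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma * t)
    out[1, 0] *= np.exp(-gamma * t)
    return out

rho_t = dephase(rho, gamma=1.0, t=5.0)
# Populations are still 0.5 each; coherences are small but nonzero.
# Decoherence is gradual, not one event per elementary interaction.
```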
People keep coming up with derivations, and other people keep coming up with criticisms of them, which is why people keep coming up with new ones.
I don’t think this is correct, either (although it’s closer). You can’t build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate neutral.
Meaning that a strong version of computational substrate independence, where any substrate will do, is false? Maybe, but I was arguing against the hypothetical that “the substrate independence of computation implies the substrate independence of consciousness”, not *for* the antecedent, the substrate independence of computation.
What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it’s possible to integrate a function using pebbles, but it’s not possible to do it using the same computation as the ball-and-disk integrator uses—the pebbles system will perform a very different computation to obtain the same result.
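As a loose illustration of that last point, here is a toy sketch of my own (the function names are invented for illustration): two substrate-appropriate algorithms that obtain the same integral on [0, 1] by entirely different computations.

```python
def integrate_riemann(f, n=100_000):
    # "Analog-style": accumulate f continuously, approximated by a left sum,
    # roughly in the spirit of a ball-and-disk integrator.
    return sum(f(i / n) / n for i in range(n))

def integrate_pebbles(f, grid=1_000):
    # "Pebble-style": tally the discrete grid cells lying under the curve.
    pebbles = sum(1 for i in range(grid) for j in range(grid)
                  if j / grid < f(i / grid))
    return pebbles / grid ** 2

# Same result, very different computations:
# integrate_riemann(lambda x: x)  ~ 0.5
# integrate_pebbles(lambda x: x)  ~ 0.5
```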
I don’t see the relevance.
So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn’t follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.
OK. A crappy computational emulation might not be conscious, because it’s crappy. It still doesn’t follow that a good emulation is necessarily conscious. You’re just pointing out another possible defeater.
This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:
Which argument? Are you saying that a good enough emulation is necessarily conscious?
Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.” Note that this isn’t “I upload a brain” (which doesn’t guarantee that the same algorithm is run)
If it’s detailed enough, it’s guaranteed to. That’s what “enough” means.
but rather “here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected”.
OK… that might prove the substrate independence of computation, which I wasn’t arguing against. Past that, I don’t see your point.
The result (at least partially) of a particular physical substrate. Physicalism and computationalism are both not-dualism, but they are not the same as each other.
The Church-Turing thesis gives us the “substrate independence principle”. In principle, AI could be conscious.
The C-T thesis gives you the substrate independence of computation. To get to the substrate independence of consciousness, you need the further premise that the performance of certain computations is sufficient for consciousness, including qualia. This is, of course, not known.
MW has to show that decoherence is a natural consequence, which is the same thing. It can’t be taken on faith, any more than entropy should be. Proofs of entropy were supplied a long time ago; proofs of decoherence of a suitable kind are a work in progress.
What does “highly sensitive” mean? In classical physics, an observer can produce an energy output much greater than the energy input of the observation, but no splitting is implied. In bare Everettian theory, an observer becomes entangled with the coherent superposition they are observing, and goes into a coherent superposition themself, so no decoherent splitting is implied. You still haven’t said where and when the initial decoherent splitting occurs.
Bohmian mechanics adds hidden variables. Why would it be simpler?
The concern here is not so much the amount of calculation as the number of bits used to implement that calculation. And there’s no law that forces the number of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn’t have to do.
And vice versa. You can do unnecessary calculation under any interpretation, so that’s an uninteresting observation.
The important point is that the minimum amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don’t change the maths, they just… interpret it… differently. In particular, a many worlder has to discard unobserved results in the same way as a Copenhagenist; they just interpret doing so as the unobserved results existing in another branch, rather than being snipped off by collapse. The maths is the same, the interpretation is different. You can also do the maths without interpreting it, as in Shut Up And Calculate.
Copenhagen has to have rules for when measurements occur and what basis they occur in.
This gets back to a long-standing confusion between Copenhagen and objective collapse theories (here, I mean, not in the actual physics community). Copenhagen, properly speaking, only claims that collapse occurs on or before measurement. It also claims that nothing is known about the ontology of the system before collapse: it’s not the case that anything “is” a wave function. An interpretation of QM doesn’t have to have an ontology, and many don’t. Which, of course, is another factor that renders the whole Kolmogorov complexity approach inoperable.
Objective collapse theories like GRW do have to specify when and how collapse occurs, but MW theories have to specify when and how decoherence occurs. Decoherence isn’t simple.
You’re saying that if you have decoherent splitting of an observer, that leads to more decoherent splitting. But where does the initial decoherent splitting come from?
We don’t have to regard basis as objective, ITFP.
Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically over a run of experiments. Quantum mechanical measure (amplitude) isn’t ordinary probability, but it’s the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much contribution each component state makes to a coherent superposition.
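To make the in/out distinction concrete, here is a minimal sketch of my own: complex amplitudes go into the Born rule, and ordinary, frequentistically testable probabilities come out.

```python
import numpy as np

# Complex amplitudes go INTO the Born rule...
amps = np.array([1 / np.sqrt(3), 1j * np.sqrt(2 / 3)])

# ...ordinary probabilities come OUT: p_i = |a_i|^2.
probs = np.abs(amps) ** 2   # approximately [1/3, 2/3]

# These can be tested frequentistically over a run of experiments:
rng = np.random.default_rng(0)
outcomes = rng.choice(len(probs), size=100_000, p=probs)
freqs = np.bincount(outcomes) / len(outcomes)
```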
ETA
There is a further problem in interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading: a clear theory of decoherence is precisely what’s lacking in Everett’s work.)
Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
It’s tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism; that is something “naive MWI” might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they should not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic utilitarian calculus? If they are not such zombies, why would they not count as full persons? The standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn’t directly answer the question about consciousness.
If “naive MWI” means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.
It’s not the existence, it’s the lack of interaction/interference.