We can test whether consciousness causes collapse once we can simulate a person on a quantum computer, so by definition it isn’t an interpretation. See chapter 8 of Quantum Theory as a Universal Physical Theory.
Assuming that the simulated person actually has consciousness and isn’t a zombie. That is a very big assumption, and if you ever did perform the experiment, consciousness-causes-collapse (CCC) enthusiasts would fight it on those grounds.
They’re making a philosophical mistake, and the fact that many people would make it doesn’t mean the many-worlds interpretation (MWI) isn’t falsifiable.
They are making what some consider a philosophical mistake and others do not. The falsifiability of MWI isn’t a scientific fact if it rests on a contentious philosophical claim. By the way, computational zombies, that is, functional duplicates of humans which lack consciousness, can’t be ruled out by the same arguments that exclude p-zombies.
No belief-theoretic mistake is considered one by those who make it. To find out whether the premise is false, we should be thinking about what’s true, not about what people think. If your functional duplicate says it’s conscious, that will be for the same reasons you would say it, and you can no more deduce your own consciousness from your talking about it than you can deduce the duplicate’s consciousness from its talking about it, as the link explains.