I’d recommend, for each argument, finding someone who makes that argument online, and posting it to Skeptics Stack Exchange. I used to do that years ago and found people were very helpful in doing research and finding good sources on a wide variety of topics.
There’s a non-trivial proof of “P or not P” for a specific P.
All provable statements follow from the axioms, including “P or not P” for any particular P. It’s provable in the same sense that any other statement is.
Note that you can prove “P or not P”.
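For concreteness, in a classical foundation this really is a one-liner. A sketch in Lean 4 (using `Classical.em`, the law of excluded middle that the classical axioms provide):

```lean
-- Excluded middle holds for any proposition P, via the classical axioms.
example (P : Prop) : P ∨ ¬P := Classical.em P
```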
How does it imply that?
I have intuitions on both sides. The intuition against is that predicting the outcome of a process can be done without having anything isomorphic to individual steps in that process—it seems plausible (or at the very least, possible and coherent) for humans to be predictable, even perfectly, without having something isomorphic to a human. But a perfect predictor would count as an arbitrarily accurate simulation.
The argument that qualia can’t be epiphenomenal.
Causality is different, for one. The you in reality has a causal structure where future actions are caused by your present state plus some inputs. The you in the simulation has a causal structure where actions are, to some extent, caused by the simulator.
I’m not really assuming that. My question is whether there’s a coherent position where humans are conscious, p-zombie humans are impossible, but simulations can be high fidelity yet not conscious.
I’m not asking if it’s true, just whether the standard argument against p-zombies rules this out as well.
But obviously you as a simulation is different in some respects from you in reality. It’s not obvious that the argument carries over.
Does the anti-p-zombie argument imply you can’t simulate humans past some level of fidelity without producing qualia/consciousness?
Or is there a coherent position whereby p-zombies are impossible but arbitrarily accurate simulations that aren’t conscious are possible?
Stop by your local college, locate the relevant department, and ask around.
Poker players do this sometimes, see e.g. https://lasvegassun.com/news/2016/jun/26/buying-in-whats-in-it-for-pokers-big-money-backers/
Luddites and communist movements in countries that didn’t adopt communism come to mind
Looking at my own experience, the thing that motivated me to do things likely to fail was the expectation of getting other benefits even if the attempt failed. One such thing is “experience”, but it could also be “it’ll be fun” or “attempting will give you status even if you fail” or any number of other things.
Or, if there is feedback after relatively little effort (you find out after the first few chapters if people like it).
There’s just something about “work hard for an extended period of time with no feedback until you find out if you won, which is a binary event with low odds” that turns people off, I guess.
I identified one paper, and it cites another that also claims this is flawed. I don’t see a reason to believe the original paper over those.
Here’s a paper claiming to identify the error. That’s enough for me; I’m convinced the original paper is just mistaken.
https://motls.blogspot.com/2018/09/frauchiger-renner-qm-is-inconsistent.html calls BS, now we just need Scott A to do the same and I’ll be convinced
Feels like there has to be something wrong with the paper. I don’t have the knowledge to analyze it myself, but I read through the paper until the methods section and they don’t discuss much beyond the math. It’s unclear to me how they’re arriving at a conclusion where different things happened from different perspectives, and particularly what percent of the time that would happen.
If someone familiar with the math could explain what the probability of each step is I think it could be a lot simpler to follow.
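For what it’s worth, here is a minimal sketch of the kind of calculation involved, assuming the commonly quoted setup: a coin entangled with a spin in the state (|h⟩|↓⟩ + |t⟩|↓⟩ + |t⟩|↑⟩)/√3, with the two outside observers measuring in the “ok/fail” bases (|h⟩ − |t⟩)/√2 and (|↓⟩ − |↑⟩)/√2. The state and bases here are my reconstruction of the standard presentation, not taken verbatim from the paper:

```python
import numpy as np

# Basis vectors for the coin (heads/tails) and the spin (down/up).
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
down, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Assumed joint state: (|h,down> + |t,down> + |t,up>) / sqrt(3).
psi = (np.kron(h, down) + np.kron(t, down) + np.kron(t, up)) / np.sqrt(3)

# The outside observers' "ok" measurement directions.
ok_coin = (h - t) / np.sqrt(2)
ok_spin = (down - up) / np.sqrt(2)

# Amplitude and probability of both observers getting "ok".
amp = np.kron(ok_coin, ok_spin) @ psi
print(amp**2)  # probability of the paradoxical (ok, ok) outcome
```

Under these assumptions the paradoxical (ok, ok) outcome comes out at probability 1/12, i.e. roughly one run in twelve, which is the figure usually quoted in discussions of the experiment.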
It’s not just the one post, it’s the whole sequence of related posts.
It’s hard for me to summarize it all and do it justice, but it disagrees with the way you’re framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of “should” notions being used even when believing in a deterministic world, which you reject. I don’t really want to argue the whole thing from scratch, but that is where our disagreement would lie.
Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts, and do you have some disagreement with them?