Can you explain and/or link this analysis of transparent Newcomb? It looks very wrong to me.
It’s only wrong if you are the kind of person who doesn’t like getting $1,000,000.
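Concretely, with the standard payoffs assumed (the $1,000,000 and $1,000 amounts come from the usual statement of the problem, not from this thread): a perfect predictor fills the big box exactly when it foresees one-boxing, so one-boxers walk away with $1,000,000 and two-boxers with $1,000. A minimal sketch:

```python
# Transparent Newcomb with a perfect predictor. The payoff amounts are
# the standard ones from the literature, assumed here for illustration.

def payoff(one_boxes: bool) -> int:
    # The predictor fills the big box iff it foresees you taking only it.
    big_box = 1_000_000 if one_boxes else 0
    small_box = 1_000  # always filled, contents visible in both variants
    return big_box if one_boxes else big_box + small_box

print(payoff(one_boxes=True))   # 1000000
print(payoff(one_boxes=False))  # 1000
```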
If only all our knowledge of our trading partners and environment were as reliable as ‘fundamentally included in the very nature of the problem specification’. You have to think a lot harder when you are only kind of confident and know the limits of your own mind-reading capabilities.
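To put numbers on the ‘only kind of confident’ point, here is a toy expected-value sketch; the accuracy parameter p, the evidential-style bookkeeping, and the payoff amounts are all illustrative assumptions, and the transparent variant adds complications (you can already see the boxes’ contents) that this deliberately ignores:

```python
# Toy expected values against a predictor that is only right with
# probability p. All numbers are illustrative assumptions.

def ev_one_box(p: float) -> float:
    # With probability p the predictor foresaw one-boxing and filled the big box.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # With probability p the predictor foresaw two-boxing and left it empty.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
# Break-even is around p = 0.5005; above that, one-boxing pulls ahead,
# but how hard you should think scales with how well you know p.
```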
If you’re going to make that kind of argument, you’re dismissing pretty much all LW-style thought experiments.
I think you’re reading in an argument that isn’t there. I was explaining the most common reason why human intuitions fail so blatantly when encountering transparent Newcomb. If anything, that is more reason to formalise it as a thought experiment.