I’m not an expert in any of the points you talk about; nevertheless, I’ll give my unconfident opinions after a quick read:
Casper
If you add to the physical laws code that says “behave like with Casper”, you have re-implemented Casper with one additional layer of indirection. It is then not fair to say this other world does not contain Casper in an equivalent way.
FDT
//This is intuitively very obvious. If you know all the relevant facts about how the world is, and one act gives you more rewards than another act, you should take the first action. But MacAskill shows that FDT violates that constraint over and over again.//
The summary of my understanding of the specific argument that follows the quote:
1. Consider an FDT agent.
2. Consider an almost infallible predictor of the FDT agent.
3. The predictor says the FDT agent will behave differently from how an FDT agent would behave.
What I logically conclude from these premises is that, although improbable given only (2), (3) is sufficient information to conclude that the predictor failed. So I expect that if the FDT agent actually modeled the predictor’s accuracy, instead of taking it as a number specified by fiat, it would deduce that the predictor is not accurate, and so avoid the bomb, because doing so no longer puts it in decisions inconsistent with those of accurately simulated copies of itself.
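The Bayesian step in this argument can be made concrete. A minimal sketch, with a made-up prior error rate (the numbers are illustrative, not from the post): if the FDT algorithm deterministically outputs one action, then a prediction of the other action has zero likelihood under a correctly functioning predictor, so conditioning on premise (3) drives the posterior probability of predictor failure to 1, no matter how small the prior error rate was.

```python
# Sketch: why observing premise (3) implies the predictor failed.
# eps is an assumed, illustrative prior probability that the predictor errs.
eps = 1e-12

# For a deterministic FDT agent, a correct predictor can never output
# the non-FDT action, while an erring predictor may.
p_obs_given_correct = 0.0  # P(predicts "not A" | predictor correct)
p_obs_given_error = 1.0    # P(predicts "not A" | predictor erred)

# Bayes' rule: posterior probability of predictor failure,
# given we observe a prediction of the non-FDT action.
posterior_failed = (eps * p_obs_given_error) / (
    eps * p_obs_given_error + (1 - eps) * p_obs_given_correct
)
print(posterior_failed)  # 1.0
```

The point of the sketch is only that the update swamps the prior: because the correct-predictor likelihood of the observation is exactly zero, the posterior is 1 for any eps > 0.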
I think the intuitive appeal of the counterexample comes from picking such a “dumb” FDT agent: your brain, which is not so constrained, immediately sees the smart thing to do and retorts “aha, how dumb”. I think if you remove this emotional layer, the FDT behavior looks like the only thing possible: if you choose an algorithm, any hardware assumed to run the algorithm accurately will run the algorithm. If there’s an arbitrarily small possibility of a mistake, it is still advantageous to make the algorithm optimal for the cases where it is run correctly.
Consciousness
I’ve just skimmed this part, but it seems to me that you provide arguments and evidence about consciousness in the sense of wakefulness or something similar, while Yudkowsky is talking about the more restricted and elusive concept of self-awareness.
I wrote this comment before reaching the part where you cite Yudkowsky making the same counterargument, so it is based only on the evidence you chose to mention explicitly.
Overall impression
Your situation is symmetric: if you repeatedly find yourself very confident that someone does not know what they are talking about, while that person is a highly regarded intellectual, maybe you are the one who is overconfident and wrong! I consider this a difficult dilemma to be in. Yudkowsky wrote a book about this problem, Inadequate Equilibria, so he is one step ahead of you on the meta.
//If you add to the physical laws code that says “behave like with Casper”, you have re-implemented Casper with one additional layer of indirection. It is then not fair to say this other world does not contain Casper in an equivalent way.//
No, you haven’t reimplemented Casper, you’ve just copied his physical effects. There is no Casper, and Casper’s consciousness doesn’t exist.
Your description of the FDT stuff isn’t what I argued.
//I’ve just skimmed this part, but it seems to me that you provide arguments and evidence about consciousness as wakefulness or similar, while Yudkowsky is talking about the more restricted and elusive concept of self-awareness. //
Both Yudkowsky and I are talking about having experiences, as he’s been explicit about in various places.
//Your situation is symmetric: if you find yourself repeatedly being very confident about someone not knowing what they are saying, while this person is a highly regarded intellectual, maybe you are overconfident and wrong! I consider this a difficult dilemma to be in. Yudkowsky wrote a book about this problem, Inadequate Equilibria, so it’s one step ahead of you on the meta.//
I don’t talk about the huge range of topics Yudkowsky does. I don’t have super confident views on any topic that is controversial among the experts, but Yudkowsky’s views aren’t in that category: they mostly just rest on basic errors.