Suppose we rule out pure CDT. That still leaves the possibility that whatever the right decision theory is (even if it's something like FDT/UDT), when you actually run the math on it, it says that rewarding people after the fact for one-time actions provides practically zero incentive (where "people" means pre-singularity humans). I don't see how we can confidently rule this out.
Yep, agreed this is possible (though pretty unlikely), but I was only invoking this stuff to argue against pure CDT (or the equivalent decision theories that Thomas was saying would rule out rewarding people after the fact being effective).
Or to phrase it a different way: I am very confident that future, much smarter people will not believe in decision theories that rule out retrocausal incentives as a class. I am reasonably confident, though not totally confident, that de facto retrocausal incentives will bite on currently alive humans. This overall makes me think it's something like 70% likely that, if we make it through the singularity well, future civilizations will spend a decent amount of resources aligning incentives retroactively.
This isn’t super confident, but you know, somewhat more likely than not.
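To make the "actually run the math" step above concrete, here is a minimal toy model of one way it could come out either direction. All the numbers (the cost, the reward, the 0.9 confidence in the payout policy) are made up for illustration; the only structural claim is that a pure-CDT rewarder has no causal reason to pay once the one-time action is already done, while an FDT/UDT-style rewarder treats its payout policy as logically linked to the actor's prediction of it.

```python
# Toy model of retroactive incentives. All numbers are hypothetical
# assumptions, not claims from the discussion above.

COST = 10      # cost to the actor of the one-time pre-singularity action
REWARD = 100   # reward the future civilization could pay out afterwards

def payout_prob(rewarder_dt: str) -> float:
    """Probability the actor assigns to actually being paid.

    A pure-CDT rewarder has no causal reason to pay once the action is
    already in the past, so the anticipated payout probability is ~0.
    An FDT/UDT-style rewarder pays, because its being the pay-type of
    agent is what generated the incentive in the first place.
    """
    if rewarder_dt == "CDT":
        return 0.0
    elif rewarder_dt == "FDT":
        return 0.9   # assumed: actor is 90% confident the policy holds
    raise ValueError(rewarder_dt)

def actor_takes_action(rewarder_dt: str) -> bool:
    """The actor acts iff the expected retroactive reward exceeds the cost."""
    return payout_prob(rewarder_dt) * REWARD > COST

print(actor_takes_action("CDT"))  # False: 0.0 * 100 < 10, no incentive
print(actor_takes_action("FDT"))  # True: 0.9 * 100 > 10
```

The open question in the thread is which branch the correct decision theory actually lands on once "people" is restricted to pre-singularity humans, i.e. whether `payout_prob` really comes out near zero or not.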