Yep, agree this is possible (though pretty unlikely), but I was just invoking this stuff to argue against pure CDT (or equivalent decision theories, which Thomas was saying would rule out rewarding people after the fact as an effective strategy).
Or to phrase it a different way: I am very confident that future, much smarter people will not believe in decision theories that rule out retrocausal incentives as a class. I am reasonably confident, though not totally confident, that de facto retrocausal incentives will bite on currently alive humans. This overall makes me think it's something like 70% likely that, if we make it through the singularity well, future civilizations will spend a decent amount of resources aligning incentives retroactively.
This isn’t super confident, but you know, somewhat more likely than not.