In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Possibly a good idea (when you put this as a trolley problem, with the whole of future potential on the other side), but too difficult to implement in a way that gives an advantage to the future development of FAI (otherwise you just increase existential risk if civilization never recovers, or replay the same race we face now).
Also, depending on temporal discounting, even a perfect plan that trades present humanity for a guaranteed future FAI could be the wrong choice, in which case we'd prefer to keep present humanity and forgo the future FAI. With no discounting, FAI is the better choice; but we don't really know which applies.
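To make that trade-off concrete (a hedged sketch; the symbols are just for illustration, not anyone's actual estimates): with a per-period discount factor $\gamma$ and an FAI arriving $T$ periods from now, sacrificing present humanity only wins if

$$
\gamma^{T}\, U_{\mathrm{FAI}} \;>\; U_{\mathrm{present}} .
$$

For any $\gamma < 1$ the left side shrinks geometrically in $T$, so a distant-enough FAI loses even if $U_{\mathrm{FAI}}$ is enormous; with $\gamma = 1$ (no discounting) a sufficiently valuable FAI always wins.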
Upvoted and I mostly agree, but there's one point I don't get. I thought temporal discounting was considered a bias. Is it not necessarily one?
The single fact that I value a candy today slightly more than I value a candy tomorrow doesn’t make my utility function inconsistent (AFAIK), so it’s not a bias.
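A minimal sketch of why that needn't be inconsistent, assuming constant (exponential) discounting with factor $\gamma < 1$: the relative value of a candy on day $t$ versus day $t+1$ is

$$
\frac{\gamma^{t}\, u}{\gamma^{t+1}\, u} \;=\; \frac{1}{\gamma},
$$

which doesn't depend on when you evaluate it, so the preference never reverses as the days approach. The usual charge of bias is aimed at hyperbolic discounting (weights like $1/(1+kt)$), where the ratio does change over time and preferences flip.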
In practice, temporal discounting usually arises “naturally” in any case, because we tend to be less sure of events further in the future and so their expected utility is lower.
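As a sketch of that "natural" discounting (an assumed toy model, not stated in the original): if a promised reward $u$ at time $t$ independently survives each period with probability $p$, then

$$
\mathbb{E}[\text{value received}] \;=\; p^{t}\, u \;=\; \gamma^{t}\, u \quad\text{with } \gamma = p,
$$

so ordinary uncertainty about the future behaves exactly like exponential discounting with factor $p$.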
Very good answer.
Also a good point.