This actually makes a huge difference; I feel kind of bad for not questioning this factor more in the model.
I think I’m willing to accept that, under deliberation, there’s at least a 1% chance that the “rational selfish agent” would accept a hyperbolic discount rate of 10^-13 or lower. That would make a pretty huge difference.
Here’s the resulting forked model:
https://www.getguesstimate.com/models/10594
The total expected QALYs are now 100M, compared to 420. It’s kind of alarming how sensitive the result is to something that seems so minor. I guess that indicates I should be writing down the confidence intervals rather than just the means, and should add in model uncertainty and other such considerations.
That said, here are some reasons why I think temporal discounting is not as useless as his post may suggest.
One difference between the model and Carl’s post (I think), is that this model is trying to be specific to a “selfish individual”, which is quite different from utility from the perspective of a utilitarian or policy analyst.
Some of this may come down to vagueness about what it means to be a “rational selfish agent.” The way I think about it, people change over time, so if you value your near-term happiness because your near-future self is similar to your present self, you may value your long-term happiness somewhat less.
Another point I’d make in its defense is that even if the hyperbolic discount rate is incredibly small (10^-8), it still has a very considerable impact.
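As a rough sanity check (my own toy numbers, not the actual Guesstimate model): if value at year t is weighted hyperbolically by 1/(1 + kt), then total discounted QALYs over a lifespan of T years is the integral of that weight, ln(1 + kT)/k. A quick sketch, assuming a hypothetical billion-year post-AGI lifespan, shows that k = 10^-13 barely matters while k = 10^-8 already cuts the total by roughly a factor of four, and the total swings by orders of magnitude across plausible k:

```python
import math

def discounted_qalys(k, T):
    """Total QALYs over T years with hyperbolic weight 1/(1 + k*t).

    The integral of 1/(1 + k*t) from 0 to T is ln(1 + k*T) / k.
    With k == 0 there is no discounting and the total is simply T.
    """
    if k == 0:
        return T
    # math.log1p(x) computes ln(1 + x) accurately for tiny x
    return math.log1p(k * T) / k

T = 1e9  # assumed lifespan in years (illustrative, not from the model)
for k in [0, 1e-13, 1e-8, 1e-4]:
    print(f"k = {k:g}: {discounted_qalys(k, T):.3g} QALYs")
```

Under these assumptions, k = 10^-13 leaves the total essentially undiscounted, k = 10^-8 reduces it to about a quarter, and k = 10^-4 collapses it by roughly four orders of magnitude, which is the same kind of sensitivity the 100M-vs-420 gap reflects.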
Of course, one challenge with the experimental support on the subject is that, as far as I know, it’s rather devoid of questions about living for millions of years.
If this is the main reason for time discounting, it doesn’t seem appropriate to extend it into the indefinite future, especially when thinking about AGI. For example, once we create superintelligence, it probably wouldn’t be very difficult to stop the kinds of changes that would cause you to value your future self less.
Good point.