Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility.
Yes, my mainline no-superintelligence-by-2100 scenario is that the trend toward a better world continues to 2100.
You’re welcome to set the baseline number to a negative value, or tweak the numbers however you want to reflect any probability of a non-ASI existential disaster happening before 2100. I doubt it’ll affect the conclusion.
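To make that concrete, here’s a toy sketch with made-up probabilities and values (not the actual formula or figures from this thread), just to show how you can vary the baseline and check whether the comparison flips:

```python
# Illustrative only: a toy expected-value comparison with hypothetical numbers,
# not the actual formula or figures from the earlier comment.

def expected_value(scenarios):
    """scenarios: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in scenarios)

# Hypothetical scenario values (arbitrary units).
V_GOOD_ASI = 100.0          # aligned superintelligence goes well
V_PAPERCLIP = -100.0        # misaligned "paperclipper" outcome
V_NO_ASI_DISASTER = -50.0   # non-ASI existential disaster before 2100

# Sweep the baseline (no-superintelligence-by-2100) value from 0 down to negative
# to see whether the rush-vs-wait comparison changes.
for baseline in (0.0, -5.0, -20.0):
    ev_rush = expected_value([(0.3, V_GOOD_ASI), (0.7, V_PAPERCLIP)])
    ev_wait = expected_value([(0.9, baseline), (0.1, V_NO_ASI_DISASTER)])
    print(f"baseline={baseline:6.1f}  EV(rush)={ev_rush:7.1f}  EV(wait)={ev_wait:7.1f}")
```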
To be honest, the only thing preventing me from granting paperclippers as much value as humans, or more, is uncertainty/conservatism about my metaethics.
Ah ok, the crux of our disagreement is how much you value the paperclipper-type scenario that I’d consider a very bad outcome. If you think that outcome is good, then yeah, that licenses you in this formula to conclude that rushing toward AI is good.