He sets aside the difference between oneself dying eventually and there being literally no recognizable posterity, which I think makes the text relatively uninteresting. A future with zero humans, or nothing recognizably human, is one where some alien entity transforms this part of the universe into what would look like a horrible scar with unrecognizable values. He also sets aside the difference between literally everyone being violently slaughtered and most people dying peacefully around age 80, not to mention outcomes worse than death.
But even granting the selfish perspective, my guess is that trying to extract such a wide range of numbers from a fairly contrived theory is just not a good idea. The outputs range from 0 to 1,000 years, so I don't know what to take from this. Plugging my own estimates into Table 6 gets me numbers that seem roughly right, though I may not fully understand what the author meant.
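For concreteness, here is a minimal sketch of the kind of selfish expected-value arithmetic involved. Every parameter below is my own illustrative assumption, not a figure from the paper's Table 6:

```python
# Toy selfish expected-value calculation for "build ASI now?".
# All numbers are illustrative assumptions, not the paper's figures.

def expected_remaining_years(p_doom: float, years_if_aligned: float) -> float:
    """Expected remaining life-years if ASI is built now: with probability
    p_doom everyone dies (zero further years); otherwise aligned ASI
    extends remaining life to years_if_aligned."""
    return (1.0 - p_doom) * years_if_aligned

BASELINE_YEARS = 40.0  # assumed remaining life expectancy without ASI

for p_doom in (0.1, 0.5, 0.9):
    ev = expected_remaining_years(p_doom, years_if_aligned=1000.0)
    verdict = "accelerate" if ev > BASELINE_YEARS else "don't"
    print(f"p(doom)={p_doom:.1f}: EV={ev:6.1f} years "
          f"vs baseline {BASELINE_YEARS:.0f} -> {verdict}")
```

Even this toy version shows why the output swings from near 0 to 1,000 years: the conclusion is almost entirely driven by whichever p(doom) and lifespan-gain figures you plug in.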
All things considered, I think there are much better options than accelerating AI, such as improving human intelligence. Improved human intelligence would extend lifespan, help us solve the alignment problem, and improve quality of life. We could also invest directly in lifespan and quality-of-life research. Overall, that is a much better deal than building unaligned ASI now.