I’m not really sure I should be worried on the scale of decades. If we’re doing a calculation of expected future years of a flourishing, technologically mature civilization, slowing down for 1,000 years here in order to increase the chance of success by something like 1 percentage point is totally worth it in expectation.
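A minimal back-of-the-envelope sketch of that expected-value claim, with made-up numbers (the civilization horizon, the baseline success probability, and the 1-percentage-point gain are all illustrative assumptions, not estimates):

```python
# Back-of-the-envelope EV comparison: pause 1,000 years for +1pp chance of success.
# All numbers are illustrative assumptions, not estimates.

horizon_years = 1e9          # assumed future years of a flourishing mature civilization
p_success_now = 0.50         # assumed chance of a good outcome if we rush ahead
p_success_paused = 0.51      # +1 percentage point after a 1,000-year pause
pause_cost_years = 1_000     # years of flourishing forgone by pausing

ev_rush = p_success_now * horizon_years
ev_pause = p_success_paused * (horizon_years - pause_cost_years)

print(f"EV rush : {ev_rush:,.0f} expected flourishing years")
print(f"EV pause: {ev_pause:,.0f} expected flourishing years")
# With a 1e9-year horizon, the +1pp gain (~1e7 expected years) dwarfs the 1,000-year delay.
```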
Given this, it seems plausible to me that one should rather spend 200 years trying to improve civilizational wisdom and decision-making than attempt specifically to unlock regulation on AI (of course the specifics here are cruxy).
While I agree for smaller numbers like a few decades, I don’t think I agree with a 1,000-year pause.
I think (a) it’s perfectly reasonable for people to be selfish and care about superintelligence happening during their lifetime (forget future people and discount factors thereof; almost every single person alive today cares orders of magnitude more about themselves than about some random person on the other side of the planet), (b) it’s easy for “delay forever” people to basically Pascal’s-mug you this way, as happened with nuclear power, and (c) it’s unclear that humanity becomes monotonically wiser over time. As an unrealistic example of (c), consider a world where we successfully create an international treaty to ensure ASI is safe, and then for some reason the entire modern world order collapses and the only actors left are random post-collapse states racing to build ASI. In that case it would have been better to build ASI in a functional pre-collapse world order than to delay. One could reasonably (though I personally don’t) believe that the current world order is likely to fail in the coming decades and that ASI is better built now than in the ensuing chaos.
it’s perfectly reasonable for people to be selfish and care about superintelligence happening during their lifetime
Yes, people are selfish; that is why you should sometimes be ready to fight against them. Point (a) is not a disagreement with Ben.
then for some reason the entire modern world order collapses
This is low-probability on the timescale of decades, but it is an argument people can use to justify their self-serving desire for immortality as somehow altruistic.
I agree that 200 years would be worth it if we actually thought it would work. My concern is that it’s not clear civilization would get better/saner/etc. over the next century rather than worse. And relatedly, every decade that goes by, we eat another percentage point or three of x-risk from miscellaneous other sources (nuclear war, pandemics, etc.), which basically imposes a time-discount factor on our calculations large enough to make a 200-year pause seem really dangerous and bad to me.
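A minimal sketch of that discount-factor worry, assuming an illustrative 1–3% of background x-risk per decade from non-AI sources (the rates are assumptions for illustration, not estimates):

```python
# How much background x-risk accumulates over a 200-year pause,
# assuming an illustrative 1-3% per decade from non-AI sources (nuclear war, pandemics, etc.).

decades = 20  # 200 years

for per_decade_risk in (0.01, 0.02, 0.03):
    survival = (1 - per_decade_risk) ** decades
    print(f"{per_decade_risk:.0%} per decade -> "
          f"{survival:.0%} chance of making it through 200 years")
# At 3% per decade, roughly 46% of the expected future is lost to background risk alone,
# which acts like a steep discount factor on the value of a long pause.
```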
If I understood Eliezer’s argument correctly, we can shorten those timescales by improving human intelligence through methods like genetic engineering. Once the majority of humans have von Neumann-level IQ, I think it’s fine to let them decide how to proceed on AI research. The question is how fast this can happen; it would probably take a century or two at least.