I agree with this fwiw. Currently I think we are in way way more danger of rushing to build it too fast than of never building it at all, but if e.g. all the nations of the world had agreed to ban it, and in fact were banning AI research more generally, and the ban had held stable for decades and basically strangled the field, I’d be advocating for judicious relaxation of the regulations (same thing I advocate for nuclear power basically).
I am not really clear that I should be worried on the scale of decades? If we’re doing a calculation of expected future years of a flourishing technologically mature civilization, slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.
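The expected-value arithmetic behind this claim can be sketched with made-up numbers; the stake, delay, and probability gain below are all illustrative assumptions, not figures from the discussion:

```python
# Toy expected-value comparison for a long pause.
# All numbers are illustrative assumptions.
future_value_years = 1e9   # assumed value of a flourishing future, in years
delay_years = 1_000        # length of the pause
p_gain = 0.01              # assumed +1 percentage point chance of success

expected_gain = p_gain * future_value_years  # 1e7 years gained in expectation
cost = delay_years                           # 1e3 years lost for certain

# The expected gain dwarfs the cost by roughly four orders of magnitude.
assert expected_gain > cost
```

On these (hypothetical) numbers the pause wins easily; the disagreement downstream is mostly about whether the pause itself changes the other terms, e.g. by adding background x-risk.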
Given this, it seems plausible to me that one should rather spend 200 years trying to improve civilizational wisdom and decision-making than attempt to specifically unlock regulation on AI (of course the specifics here are cruxy).
I agree that 200 years would be worth it if we actually thought that it would work. My concern is that it’s not clear civilization would get better/more sane/etc. over the next century vs. worse. And relatedly, every decade that goes by, we eat another percentage point or three of x-risk from miscellaneous other sources (nuclear war, pandemics, etc.), which basically imposes a time-discount factor on our calculations large enough to make a 200-year pause seem really dangerous and bad to me.
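The compounding effect of that background risk can be made concrete; the 2%-per-decade figure below is just an assumed point within the "percentage point or three" range above:

```python
# Compounding background x-risk over a 200-year pause.
per_decade_risk = 0.02   # assumed 2% catastrophe risk per decade
decades = 20             # 200-year pause = 20 decades

p_survive = (1 - per_decade_risk) ** decades
# p_survive ≈ 0.67: roughly a one-in-three chance the pause itself
# proves fatal before it ever pays off, on these assumptions.
```

At 3% per decade the survival probability drops to about 0.54, which is the sense in which background risk acts like a time-discount factor on the pause's expected value.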
I think the same world that coordinated well enough to do a centuries-long AGI pause (without heralding a dark age of negative economic growth or global totalitarianism, etc) is probably also more than capable of preventing thermonuclear war, extinction-level artificial pandemics, grey goo, etc.
At that point your biggest risks are natural x-risks (very low), authoritarian backsliding, meme wars, and some fraction of unknown unknowns.
While I agree for smaller numbers like a few decades, I don’t think I agree with a 1,000-year pause.
I think (a) it’s perfectly reasonable for people to be selfish and care about superintelligence happening during their lifetime (forget future people and their discount factors; almost every single person alive today cares OOMs more about themselves than about some random person on the other side of the planet), (b) it’s easy for “delay forever” people to basically Pascal’s-mug you this way, as with nuclear power, and (c) it’s unclear that humanity becomes monotonically wiser over time. (As an unrealistic example, consider a world where we successfully create an international treaty to ensure ASI is safe, and then for some reason the entire modern world order collapses and the only actors left are random post-collapse states racing to build ASI. Then it would have been better to build ASI in a functional pre-collapse world order than to delay. One could reasonably (though I personally don’t) believe that the current world order is likely to fail in the coming decades and that ASI is better built now than in the ensuing chaos.)
>it’s perfectly reasonable for people to be selfish and care about superintelligence happening during their lifetime
Yes, people are selfish; that is why you should sometimes be ready to fight against them. Point (a) is not a disagreement with Ben.
>then for some reason the entire world modern order collapses
This is low-probability on the time scale of decades, but it is an argument people can use to justify their self-serving desires for immortality as somehow altruistic.
If I understood Eliezer’s argument correctly, we can shorten those timescales by improving human intelligence through methods like genetic engineering. Once the majority of humans have von Neumann-level IQ, I think it’s fine to let them decide how to proceed on AI research. The question is how fast this can happen, and it would probably take a century or two at least.
>slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.
Is it? What meaning of “worth it” is being used here? If you put it to a vote as an option, I expect it would lose. People don’t care that much about the happiness of distant future people.
>Given this, it seems plausible to me that one should rather spend 200 years trying to improve civilizational wisdom and decision-making
Make it a thousand, or two thousand. To Daniel’s point, societal change is not always positive on the scale of centuries. But on the scale of millennia it is, at least for the last few.
SI can come when we say so. On the human evolutionary timescale, 10K years is short. On a cosmic timescale, it is nothing.