I am clearly coming from a very different set of assumptions! I have:
P(AGI within 10 years) = 0.5. This is probably too conservative, given that many of the actual engineers with inside knowledge place this number much higher in anonymous surveys.
P(ASI within 5 years|AGI) = 0.9.
P(loss of control within 5 years|ASI) > 0.9. Basically, I believe “alignment” is a fairy tale, that it’s Not Even Wrong.
If I do the math (0.5 × 0.9 × 0.9 = 0.405), that gives me a 40.5% chance that humans will completely lose control over the future within 20 years. That seems high to me at first glance, but I’m willing to go with it.
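To make the arithmetic explicit, here is a minimal sketch of that chained-probability calculation (the variable names are mine; the figures are just the subjective estimates above):

```python
# Subjective estimates from the comment above.
p_agi_10y = 0.5         # P(AGI within 10 years)
p_asi_given_agi = 0.9   # P(ASI within 5 years | AGI)
p_loss_given_asi = 0.9  # P(loss of control within 5 years | ASI)

# Multiplying the chain gives the joint probability of all three
# events, i.e. loss of control within roughly 20 years.
p_loss_20y = p_agi_10y * p_asi_given_agi * p_loss_given_asi
print(f"P(loss of control within ~20 years) = {p_loss_20y:.1%}")  # 40.5%
```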
The one thing I can’t figure out how to estimate is:
P(ASI is benevolent|uncontrolled ASI) = ???
I think that there are only a few ways the future is likely to go:
AI progress hits a wall, hard.
We have a permanent, worldwide moratorium on more advanced models. Picture a US/China/EU treaty backed up by military force, if you want to get dystopian about it.
An ASI decides humans are surplus to requirements.
An ASI decides that humans are adorable pets and it wants to keep some of us around. This is the only place we get any “utopian” benefits, and it’s the utopia of being a domesticated animal with no ability to control its own fate.
I support a permanent halt. I have no expectation that this will happen. I think building ASI is equivalent to BASE jumping in a wingsuit, except even more likely to end horribly.
So I also support mitigation and delay. If the human race has incurable, metastatic cancer, the remaining variable we control is how many good years we get before the end.
Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.
Unfortunately, I saw it about 3 or 4 months ago, and I haven’t been able to find the source since. Maybe something Zvi Mowshowitz linked to in a weekly update?
I am incredibly frustrated that web search is a swamp of AI spam, and tagged bookmarking tools like Delicious and Pinboard have been gone or unreliable for years.