It’s physically impossible to ever build STEM+ AI.
This is false almost by definition: under known physics it cannot be impossible, since physical systems (human brains) already do STEM.
STEM+ AI will exist by the year 2100.
Similarly close to tautological: a machine that is STEM+ can be developed by various recursive methods.
If STEM+AI is built, within 10 years AIs will disempower humanity.
This is orthogonal: a STEM+ machine need have no goals of its own.
If STEM+AI is built, within 10 years AIs will be (individually or collectively) able to disempower humanity.
Humans have to make some catastrophically bad choices for this to even be possible.
The first time an AI reaches STEM+ capabilities (if that ever happens), it will disempower humanity within three months.
Foom requires physics to allow this kind of doubling rate, and it likely doesn't, at least starting from human-level technology.
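The doubling-rate objection can be made concrete with a back-of-the-envelope calculation. This is a sketch under an illustrative assumption (the 1000x growth factor is not a figure from the source, just a placeholder for "enough capability/infrastructure to disempower humanity"):

```python
import math

def required_doubling_time(growth_factor: float, window_days: float) -> float:
    """Days per doubling needed to multiply effective capability
    by `growth_factor` within `window_days`."""
    return window_days / math.log2(growth_factor)

# Illustrative assumption: disempowering humanity requires ~1000x
# growth in effective capability within the claimed 3-month window.
days = required_doubling_time(1000, 90)
print(f"{days:.1f} days per doubling")  # ~9.0 days per doubling
```

A sustained ~9-day doubling of real-world capability and infrastructure is the kind of rate physics (and manufacturing) would have to permit for the three-month claim to hold.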
Given sufficient technical knowledge, humanity could in principle build vastly superhuman AIs that reliably produce very good outcomes.
No state, no problems. I define “good” as “aligned with the user”.
It would be an unprecedentedly huge tragedy if we never built STEM+ AI.
This would be the early death of every human who will ever live, for all time. Medical issues are too complex for human brains to ever solve reliably. Adding some years with drugs or gene hacks, sure. Stopping every possible way someone can die, so that people witness their 200th and 2000th birthdays? No way.
Disempowerment does not depend on any sort of hard foom; the AI need only win an interspecies war.
“Within 3 months”: how did the model win in 3 months if it cannot manufacture more of its own infrastructure quickly? Wouldn’t a nuclear war kill the model, or doom it to die from equipment failure? Wouldn’t any major war break the supply lines for the highest-end IC manufacturing?
Perhaps so.