The number of people who actually had the deep technical skills and knowledge to evaluate the risk of ignition of the atmosphere from nuclear weapons was very small, and nearly completely overlapped with the people developing the very same weapons of mass destruction that were the source of that risk.
The number of people who have the deep technical knowledge, skills, and talent necessary to correctly evaluate the actual risk of AGI doom is probably small, and necessarily overlaps with the people most capable of creating it.
I’m curious how this fits into the context. Regardless of whether or not one believes it’s true, doesn’t it seem reasonable and intuitively right—so the opposite of what is asked for?
I think the argument that most would have seen as ridiculous in the nuclear weapons example is, "The right arrangement of (essentially, what look like) rocks and metals will not only make a big explosion, but could destroy all life on earth in seconds." The argument in favor (and the eventual, correct argument against) were both highly technical and inaccessible. Also, the people deepest in the technical weeds were both the ones capable of seeing the danger and the ones needed to figure out whether the danger was real.
So it would be:
Claim: A nuclear bomb could set the atmosphere on fire and destroy everything on earth
Argument: Someone did a calculation.
Counterargument: Clearly, that’s absurd.
Good Counterargument: Someone else did another calculation.
And I guess the analogy to AI applies to foom/"room at the bottom", where one can actually do calculations to, at least in principle, estimate some OOMs.
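For illustration, here is the kind of back-of-envelope OOM calculation one can do in a few lines. This is a deliberately crude toy (comparing bomb yield to the energy needed to bulk-heat the whole atmosphere to fusion temperatures), not the actual 1946 Konopinski–Marvin–Teller analysis, which asked the harder question of whether a fusion reaction could locally self-propagate; the temperature threshold below is a rough assumption, the other numbers are standard reference values:

```python
import math

# Standard reference values
ATMOSPHERE_MASS_KG = 5.15e18     # total mass of Earth's atmosphere
CP_AIR_J_PER_KG_K = 1005.0       # specific heat of air at constant pressure
MT_TNT_J = 4.184e15              # joules per megaton of TNT
LARGEST_BOMB_MT = 50.0           # Tsar Bomba yield, ~50 Mt

# Assumption: rough temperature scale for fusion of light nuclei
FUSION_TEMP_K = 1e8

# Energy to heat the entire atmosphere to fusion temperatures,
# versus the energy released by the largest bomb ever tested
energy_to_heat_atmosphere = ATMOSPHERE_MASS_KG * CP_AIR_J_PER_KG_K * FUSION_TEMP_K
largest_bomb_energy = LARGEST_BOMB_MT * MT_TNT_J

shortfall_ooms = math.log10(energy_to_heat_atmosphere / largest_bomb_energy)
print(f"Bulk heating falls short by ~{shortfall_ooms:.0f} orders of magnitude")
```

The toy version comes out around twelve orders of magnitude short, which is exactly the kind of result that is easy to state but only meaningful to someone who can check whether "bulk heating" is even the right criterion (it isn't; self-propagation is).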