To clarify, I’m thinking mostly about the strength of the strongest counter-argument, not the quantity of counter-arguments.
But yes, what counts as a strong argument is a bit subjective and lies on a continuum. I wrote this post because none of the counter-arguments I know of are strong enough to be “strong” by my standards.
Personally, my strongest counter-argument is “humanity actually will recognize the x-risk in time to take alignment seriously, delaying the development of ASI if necessary”, but even that isn’t backed by much evidence (the only precedent I know of is when we avoided nuclear holocaust).
What do you think are the strongest arguments in that list, and why are they weaker than a vague “oh maybe we’ll figure it out”?
Hmm, “Where I agree and disagree with Eliezer” actually has some pretty decent counter-arguments, at least in the sense of making things less certain.
However, I still think there’s the problem of “the NN writes a more traditional AGI that is capable of foom, and runs it”.