If you would come to a very similar March but object to details of the current framing, please let me know in the comments, and consider registering your email via the “Keep me informed” checkbox without making a commitment.
There’s a decent chance I would join the March as is, given that I directionally agree with its sentiment and its recommendation. But I don’t agree with some of the “We believe...” statements, which sound like they are intended to speak for all of the people who come to the March.
I disagree with these:
> We believe that if any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
>
> We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today.
This is stated quite confidently, implying >>50% on this, while I put it at less than 50%. Maybe it could be over 50% given a strict operationalization of what counts as remotely similar to current techniques and the present understanding. In any case, I think I disagree with what most people would take away from this statement.
> It’s not useful for only one country to ban advancement of AI capabilities within its own borders. AI development would just keep happening in other countries by people who didn’t understand the dangers, until eventually someone somewhere built machines that were substantially smarter than any human.
This seems to imply that the US government could not on its own significantly decrease p(doom). That seems very wrong to me: implementing a slowdown for a few months to a year at the right moment seems like a huge deal. An international treaty would be better, but this seems too defeatist about domestic options.
Yeah, I think (not speaking for MIRI) that the FAQ should be rephrased so the vibe is more “here’s what we believe, but there are a bunch of reasons you might want to support this.”
> It’s not useful for only one country to ban advancement of AI capabilities within its own borders.
> This seems to imply that the US government could not on its own significantly decrease p(doom).
I think my personal beliefs would say “it’s not very useful” or something. I think the “ban AGI locally” plan depends on a pretty specific path to be useful, and I don’t read the current phrasing as ruling out “one country bans it and also does some other stuff in conjunction.” (Actually, upon reflection, I’m not that confident I know what sort of scenario you have in mind here.)
I think that a slowdown in the neighborhood of “ban AI development temporarily near but not after max-controllable AI” could potentially be very impactful. The case for a ban that lasts long enough to let China pull ahead is less clear. I’m not sure what the intention of the sentence was, but to me it seems to imply that any domestic action on its own would be of very little use.
The thing I saw the sentence as doing is mostly clarifying “We’re not naive; obviously just doing the naive thing here would not work, and that’s why we’re not asking for it.” (I think I agree that a US ban would be some-kind-of-useful, but it feels way less politically viable to me, since it feels more like throwing away the lead for no reason to most people. I realize it may sound weird to think “banning in one country is less viable than banning worldwide,” but I think the worldwide ban clearly makes sense in a way that banning locally only maybe makes sense if you tune the parameters just right.)
> “ban AI development temporarily near but not after max-controllable AI”
I’m not sure I’m parsing the grammar here; wondering if you flipped the sign or I’m misreading. (It sounds like “AIs that are almost uncontrollable are banned, uncontrollably powerful AIs are allowed.”)
Came here to post something similar. My quick guess is that such a march would get a better attendance-times-direction fit by expanding the scope a fair bit, but I realize that the creators have certain preferences here.