I signed the statement. My concern, which you don’t address, is that I think the statement should call for a prohibition on AGI, not just ASI. I don’t think there is any meaningful sense in which we can claim that particular developments are likely to lead to AGI, but definitely won’t lead to ASI. History has shown that anytime narrow AI reaches human levels, it is already superhuman. Indeed, if one imagines that tomorrow one had a true AGI (I won’t define AGI here, but imagine an uploaded human that never needs to sleep or rest), then all one would need to do to make ASI is to add more hardware to accelerate thinking or add parallel copies.
I want that statement too, but that doesn’t seem to be this one’s job. This one is for establishing common knowledge that “it’d be bad to build ASI under current conditions”; there probably isn’t enough consensus yet that “...and that means stop building AGI”, so it wouldn’t be very useful to try.