There are sometimes deadlines: we could get unacceptable outcomes by failing to make a particular sort of progress by the time a particular state of affairs arrives. It is quite misleading both to refer to these fields as possibly needing to be fully solved, and to refer to them as not containing things that might need to be solved by a deadline.
Yeah, I agree it totally makes sense and is important to ask whether we understand things well enough for it to be fine to (let anyone) do some particular thing, for each of the various particular things here.[1] And my previous comment is indeed potentially misleading given that I didn’t clarify this (though I do clarify it in the linked post).
Indeed, I think we should presently ban AGI for at least a very long time; I think it’s plausible that there is no time t such that it is fine at time t to make an AI that is (1) more capable than humans/humanity at time t and (2) not just a continuation of a human (like a mind upload) or of humanity or something like that; and I think fooming should probably be carefully regulated forever. I think humans/humanity should be carefully growing ever more capable, with plausibly no non-human AIs above humans/humanity, ever.