In general, I think that how the IE happens and is governed is a much bigger deal than when it happens.
(I don’t have much hope in trying to actually litigate any of this, but:)
Bro. It’s not governed, and if it happens any time soon it won’t be aligned. That’s the whole point.
The right response is an “everything and the kitchen sink” approach — there are loads of things we can do that all help a bit in expectation (both technical and governance, including mechanisms to slow the intelligence explosion), many of which are easy wins, and right now we should be pushing on most of them.
How do these small kitchen sinks add up to pushing back AGI by, say, several decades? Or add up to making an AGI that doesn’t kill everyone? My super-gloss of the convo is:
IABIED: We’re plummeting toward AGI at an unknown rate and distance; we should stop that; to stop that we’d have to do this really big hard thing; so we should do that.
You: Instead, we should do smaller things. And you’re distracting people from doing smaller things.
Is that right? Why isn’t “propose to the public a plan that would actually work” one of your small things?