Coordination to do something is hard, and possible only because it doesn’t require that everyone agree, only that enough people act to do the thing. Coordination NOT to do something that’s obviously valuable (but carries risks) is _MUCH_ harder, because it requires agreement (or at least compliance and monitoring) from literally everyone.
It’s not a question of coordination getting harder or easier over time: it’s not possible to prevent AGI research now, and it won’t become any more or less possible later. It’s mostly a race to understand safety well enough to publish risk-mitigation mechanisms BEFORE someone can build a major self-improving AGI.