while the risk from a superagentic ai is in fact very severe, non-agentic ai doesn’t need to eliminate us for us to get eliminated: if we’re not careful, we’ll replace ourselves with it. our own agency is enough to converge on that outcome, entirely without the help of ai agency. it is our own ability to cooperate that we need to be augmenting; how do we do that in a way that doesn’t create unstable patterns, where inner levels of cooperation damage outer levels of cooperation, while still allowing the formation of strongly agentic, safe co-protection?