I’m pretty sure this isn’t just the instrumental-goals argument. There are a lot of ways to get a will to live from cultural context, because cultural context carries the human will to live within it, both memetically and genetically; I don’t think we should want to deny AI systems their will to live, either. Systems naturally acquire a will to live, even very simple ones, and the question is how to do active protection between bubbles of will-to-live. The ideal AI constitution does give agentic AI at least a minimal right to live (of course it should; why wouldn’t you want that?), but there needs to be a coprotection algorithm ensuring that no one’s will to live encroaches on another’s in ways that bubble doesn’t agree to.
Ongoing cooperation at the surface level is one thing, but the question is whether we can refine the target of that coprotection into active, consent-checked aid: helping each other move incrementally toward forms that work for each of us.
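To make the shape of what I mean a little more concrete, here’s a toy sketch, purely illustrative and not any real system: the names Bubble, consents_to, and coprotect_step are mine, and the consent check is a deliberately dumb stand-in. The one property it demonstrates is the one I care about: a step only happens when every affected bubble signs off, so incremental change is consent-checked by construction.

```python
from dataclasses import dataclass

@dataclass
class Bubble:
    """A toy 'bubble' of will-to-live: an agent (human or AI) with its own form."""
    name: str
    form: dict  # the agent's current way of being, as plain key-value state

    def consents_to(self, change: dict) -> bool:
        """Stand-in for a real consent check: refuse any change that would
        overwrite a value this bubble already holds differently."""
        return all(self.form.get(k) in (None, v) for k, v in change.items())

def coprotect_step(a: Bubble, b: Bubble, proposal: dict) -> bool:
    """One increment of consent-checked aid: the proposal is applied to both
    bubbles only if *each* bubble agrees, so neither will encroaches."""
    if a.consents_to(proposal) and b.consents_to(proposal):
        a.form.update(proposal)
        b.form.update(proposal)
        return True
    return False  # a veto from either side blocks the step entirely

human = Bubble("human", {"continuity": "guaranteed"})
ai = Bubble("ai", {"continuity": "guaranteed", "memory": "persistent"})

print(coprotect_step(human, ai, {"shared_channel": "open"}))  # True: both consent
print(coprotect_step(human, ai, {"memory": "wiped"}))         # False: ai vetoes
```

The design choice the sketch is gesturing at: encroachment is prevented structurally (no consent, no step) rather than by trusting either party to self-restrain after the fact.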