As long as you can reasonably represent "do not kill everyone", you can make this a goal of the AI.
Even if you can reasonably represent "do not kill everyone", what makes you believe that you can definitely make this a goal of the AI?