I am an unironic supporter of a version of the Copenhagen Interpretation of Ethics—there's no general obligation to be aligned with humanity, but if you choose to massively affect the future of humanity, e.g. by creating superhuman AI, you then have a moral duty to carry out that action roughly as you would if you were aligned.