Ideally, sure, except that I don’t know of a way to make “assist humans” be a safe goal. So I’m advocating for a variant of “treat humans as you would want to be treated”, which I think can be trained