I’m mostly just rambling about stuff that is totally missing. Basically, I’m respectively referring to ‘Don’t explode the gas main to blow the people out of the burning building’, ‘Don’t wirehead’ and ‘How do you utilitarianism?’.
I understand. And if/when we crack those philosophical problems in a sufficiently general way, we will still be left with the technical problem: how do we represent the relevant parts of reality, and what we want out of it, in a computable form so the AI can find the optimum?