> Unless we have an AI that shares our mind architecture (like in Steven Byrnes’ agenda)
I think there’s an important distinction here between (a) “containing human value concepts” and (b) “being able to point at human value concepts”. Systems sharing our mind architecture make (a) more likely, but they do not make (b) more likely, and I think (b) is what’s required for good outcomes.