You seem to jump from “our terminal values are a function of our evolutionary history” to “our terminal values do not include terms for the well-being of any aliens which we might encounter”, which does not follow and which is, as evidenced by this post, untrue. A CEV-based FAI would spend resources to help aliens to exactly the degree that we would care about those aliens.
I didn’t make that jump; I wrote this post after reading that jump in the thread linked to at the top, which prompted me to think more about the issue.
Our FAI might care about alien values, but that doesn’t mean an alien FAI would care about ours.
Assuming that Desrtopa isn’t weird (or WEIRD), it would help if we knew the critical causes of humans’ caring about alien values. For example, suppose social animals, upon achieving intelligence and language, come to care about the values of anyone they can hold an intelligent conversation with. (Not that we know this now—but perhaps we could know it later, if it’s true.) In that case, we may be safe as long as intelligent aliens are likely to be social animals. (ETA: provided, duh, that they don’t construct uFAI and destroy themselves, then us.)