This is similar to a discussion I wanted to start, so I’ll just leave a comment here instead:
If we were to detect the presence of an alien civilization before the SIAI implements CEV, should the CEV account for the aliens’ extrapolated volition?
I ask myself what advice I would give to terrorists, if they were programming a superintelligence and honestly wanted not to screw it up, and then that is the advice I follow myself.
Eliezer Yudkowsky
I ask myself what advice I would give to aliens, if they were programming a superintelligence and honestly wanted not to screw it up, and then that is the advice I follow myself.
Eliezer Yudkowsky (counterfactual)
There are a few problems:
Not accounting for the alien volition could amount to genocide.
Accounting for the alien volition could let it outweigh our own through sheer numbers (e.g. if they are insectoids).
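The sheer-numbers worry can be made concrete with a toy sketch (a hypothetical model of my own, not anything from the CEV document): if an aggregator simply sums each individual’s extrapolated preference, the more numerous civilization decides the outcome regardless of how the minority feels.

```python
def aggregate(populations):
    """Sum preference weight over every individual of every civilization.

    Each entry is (population count, per-individual preference),
    where preference > 0 favors outcome A and preference < 0 favors B.
    """
    return sum(count * preference for count, preference in populations)

# Illustrative numbers (assumptions, chosen only for scale):
# 8 billion humans mildly prefer outcome A (+1);
# 10 trillion insectoids mildly prefer outcome B (-1).
humans = (8_000_000_000, +1.0)
insectoids = (10_000_000_000_000, -1.0)

result = aggregate([humans, insectoids])
# result is negative: the insectoids' numbers alone decide the outcome.
```

Under unweighted summation, no intensity of human preference short of a thousandfold difference would change the result here.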
Both arguments cut both ways. If you accept the premise that the best approach is to account for all agents, then we are left with the problem of possibly being a minority. But it appears much more likely that we’ll be risk-averse and expect the aliens not to follow the same line of reasoning. The FAIs of both civilizations might then try to subdue each other.
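The risk-averse dynamic can be sketched as a toy two-player game (the payoff numbers are my own assumptions, picked only to make the structure visible): each FAI chooses whether to account for the other civilization’s volition (“cooperate”) or to subdue it, and a maximin (worst-case) chooser ends up subduing even though mutual cooperation is better for both.

```python
# Hypothetical payoff matrix: (our choice, their choice) -> our payoff.
payoffs = {
    ("cooperate", "cooperate"): 3,  # both volitions accounted for
    ("cooperate", "subdue"): 0,     # we are subdued
    ("subdue", "cooperate"): 4,     # we subdue them unopposed
    ("subdue", "subdue"): 1,        # costly mutual conflict
}

def maximin_choice(payoffs):
    """Pick the action whose worst-case payoff is highest."""
    actions = {ours for ours, theirs in payoffs}
    return max(
        actions,
        key=lambda a: min(p for (ours, _), p in payoffs.items() if ours == a),
    )

print(maximin_choice(payoffs))  # "subdue"
```

With these payoffs, cooperating risks a payoff of 0 while subduing guarantees at least 1, so a risk-averse agent that cannot trust the other side to reason the same way chooses to subdue.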
What implications would arise from the detection of an alien civilization technologically similar to ours?
Accounting for the alien volition could let it outweigh our own through sheer numbers (e.g. if they are insectoids).
If we account for alien volition directly, then yes, this could be a problem. But if we only care about aliens because we’re implementing CEV and some humans care about aliens, then scope insensitivity comes into play and the amount of resources that will be dedicated to the aliens is limited.
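The distinction above can be illustrated with a toy model (hypothetical, and not a claim about what CEV would actually compute): if the FAI cares about aliens only via human concern, and human concern is scope-insensitive (here modeled, as an assumption, as roughly logarithmic in the number of aliens), then the weight given to the aliens stays bounded no matter how many of them there are, whereas direct accounting scales linearly.

```python
import math

def human_concern(n_aliens):
    """Scope-insensitive valuation: grows only logarithmically (assumed model)."""
    return math.log10(1 + n_aliens)

def direct_weight(n_aliens):
    """Direct accounting: every alien counts individually."""
    return float(n_aliens)

n = 10**13  # ten trillion aliens (illustrative number)
print(human_concern(n))   # ~13 units of concern
print(direct_weight(n))   # 10 trillion units
```

Ten trillion aliens contribute about thirteen units of mediated concern versus ten trillion units under direct accounting, which is the sense in which the resources dedicated to them would be limited.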
Scope insensitivity is a failure to properly account for certain things; CEV is designed to account for everything. It is possible that some conclusions arrived at due to scope insensitivity will be upheld, but we do not yet know whether that is true, and current human choices that we know to be the product of biases definitely do not count as evidence about how CEV will choose.
For analogous reasons, CEV does not sound particularly ‘Friendly’ to me.
If we only implement CEV for the people working for the SIAI, and some of them care about the rest of humanity... what’s the difference?