This sounds like just a special case of the principle that Friendly AI should believe what is true and want what we want, rather than believe what we believe and want what we profess to want.
Most of the specific methods by which a UFAI could actually destroy us could also be employed by unfriendly humans. Adding an AI to the scenario presumably makes it worse mainly by amplifying the speed with which the scenario plays out and adding the unpredictability of an alien mindset.
Disagree. There are probably many strategies that a merely human-level intelligence cannot carry out or even think of.
Oops, good point. I should have said
“Most of the specific methods we’ve thought of by which a UFAI could actually destroy us could also be employed by unfriendly humans.”
I think it just confused the question. It is unclear whether the OP is an FAI discussion or a patternism discussion, and now there are people talking about both.
Upvoted for being the maximally concise explanation of what FAI is.
It is too concise to be a description of what an FAI is. It doesn't seem to do anything except believe and want.
Fixed to add action. The new one is even shorter.
philosophical golf
I like it!