Why should we help you with a project whose long-term goal appears to be turning the earth into paperclips?
Symmetry.
Edit: I mean decision-theoretic symmetry (you should help other intelligent beings achieve their values, because if you did not find that path appealing, neither would beings with different values find it appealing to assist you in achieving yours, thereby cutting off support from any agent even slightly different from you), not the symmetry of “turning the earth into paperclips”, whatever that would mean.
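To make that concrete, here is a toy sketch of the symmetry argument, treating it as a twin prisoner’s dilemma in which agents with different values run the same decision procedure you do, so your choice of policy is also theirs. This is my illustration, not part of the original comment; the payoff numbers and names are invented.

```python
# Toy model: under the symmetry assumption, every other agent's policy
# mirrors yours, so refusing to help unlike agents means none help you.

ASSIST_BENEFIT = 10  # value gained when other agents assist your projects
ASSIST_COST = 3      # value spent assisting agents with different values

def utility(my_policy: str) -> int:
    """Your utility, assuming symmetric agents adopt the same policy."""
    their_policy = my_policy  # the symmetry assumption
    received = ASSIST_BENEFIT if their_policy == "help" else 0
    paid = ASSIST_COST if my_policy == "help" else 0
    return received - paid

for policy in ("help", "refuse"):
    print(policy, utility(policy))
# help   -> 7  (you pay the cost but receive assistance in return)
# refuse -> 0  (you cut off support from every agent unlike you)
```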
This will be AI researchers’ first chance to observe an AI in the wild safely.
I dispute that last word.
I dispute the fourth word.
Huh? “that” or “be”? Neither of those makes sense to dispute!
“safely”, on the other hand, does.
The original version of the comment being commented on was missing the word “be” between “will” and “AI”. RobertLumley’s dispute could have been intended either to point that out, or to dispute that Clippy is an AI.
Yeah, I think the original comment was edited. Thanks for clearing that up, because I would have been very confused, even knowing what I commented on...
I guess I’ll edit my comment too.