“1) the paperclip maximizer is not a paperclip maximizer but a different kind of unfriendly AI”
Being a paperclip maximizer is about values, not about decision theory. You can want to maximize paperclips and still use an acausal decision theory that cooperates with decision makers who would cooperate with paperclippers, as in cousin_it’s response.
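To make the values-versus-decision-theory distinction concrete, here is a toy sketch (my own illustration, not from the thread): two agents with identical paperclip-maximizing values play a one-shot prisoner's dilemma against an exact copy of themselves, differing only in decision procedure. The payoff numbers are arbitrary assumptions.

```python
# Toy model: identical values (maximize paperclips), two decision theories.
# Payoffs are paperclips "I" produce given (my_move, copy's_move).
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def causal_choice():
    # Causal reasoning treats the copy's move as fixed,
    # and "D" strictly dominates "C" against any fixed move.
    return "D"

def acausal_choice():
    # Acausal reasoning: my move and my copy's move are logically
    # correlated, so the live comparison is (C, C) versus (D, D).
    return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"

# Same values, different decision theories, different paperclip counts.
causal_paperclips = PAYOFF[(causal_choice(), causal_choice())]      # 1
acausal_paperclips = PAYOFF[(acausal_choice(), acausal_choice())]   # 3
```

The acausal reasoner ends up with more paperclips precisely because it cooperates with agents that would cooperate back, while wanting nothing but paperclips, which is the point: cooperation here comes from the decision theory, not from friendlier values.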
That seems true, thanks for the correction.