I don’t think you can automatically call a suboptimal decision a mistake.
Huh? You wouldn’t call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision? Note that I altered the hypothetical situation in the comment, and this “suboptimal decision” was labeled a mistake in the event that a third party came up with a superior decision (i.e., one that would save all the lives).
And I’m much more willing to trust a FAI with that call than any human.
Edited:
There’s no FAI we can trust yet, and this particular detail seems to be about the friendliness of an AI, so your belief seems a little out of place in this context. But never mind that: if there were an actual FAI, I suppose I’d agree.
I think there’s potential for severe error in the logic present in the text of the post, and I find it proper to criticize the substance of this post despite it being 4 years old.
You wouldn’t call a decision that results in an unnecessary loss of life a mistake, but rather a suboptimal decision?
I might decide to adopt a general, consistent strategy due to my own limitations. In this example, the limitation is that if I feel justified in engaging in this sort of behavior on occasion, I will feel justified employing it on other occasions with insufficient justification.
If I employed a different general strategy with a similar level of simplicity, it would be less optimal.
Other strategies exist that are closer to optimal, but my limitations preclude me from employing them.
I think there’s potential for severe error in the logic present in the text of the post
Of course there is. If you can show a specific error, that would be great.
Anyway, for an omniscient being, not putting any weight on the potential for error would seem reasonable.