There is truth to what you say but unfortunately you are letting your frustration become visible.
LOL… indeed.
I am not sure that I am actually, in far mode, so interested in correcting this particular LW bias. In near mode, SOMEONE IS WRONG ON THE INTERNET bias kicks in. It seems like it’ll be an uphill struggle that neither I nor existential risk mitigation will benefit from. A morally naive LW is actually good for X-risks, because that particular mistake (the mistake of thinking in terms of black-and-white morality and Good and Evil) will probably make people more “in the mood” for selfless acts of charity.
I think I agree. If Eliezer didn’t have us all convinced that he is naive in that sense we would probably have to kill him before he casts his spell of ultimate power.
(cough The AI Box demonstrations were just warm-ups...)