What consequences? That claim is badly in need of support.
No, it isn’t. It’s Less Wrong/MIRI boilerplate.
Which is accepted by virtually no domain expert in AI.
If you are concerned that people aren’t taking the orthogonality thesis seriously enough then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.
It could be persuasive to a selected audience: people with a science background who don’t know much moral philosophy. If you do know much moral philosophy, you would know that there isn’t that much evidence for any position, and that there is no unproblematic default position.