“I don’t think it’s hard, as a question of computer science, to get an AI to prioritize the goals you intended.”
MIRI has worked on that problem for two decades, and failed to solve it. I am shocked that someone who says they are a research scientist on the Scalable Alignment team at Google DeepMind could so cavalierly and naïvely dismiss the difficulty of the alignment problem.
Is there any hope for legal professionals? Attorneys are TRAINED to start with the bottom line, a predetermined conclusion, and then to backfill the reasons. The last thing their clients want them to do is weigh the evidence objectively and only then reach a conclusion.
The only highly educated Flat Earthers I have ever encountered have been attorneys. This, I believe, is not a coincidence.