What is missing from this post is a realistic consideration of scope. If CFAR is correct, then their pivot is both extremely important and fairly urgent. The utility difference is so large that endorsing AI safety as the primary goal would be defensible even if they thought it was probably not the biggest problem.