In the case of AI safety, the analogy maps through things like past research results, or general ability to reason and make arguments. You can check the claim that, e.g., Eliezer historically made many good non-trivial arguments about AI where he was the first person, or one of the first, to make them. While the checking is harder than in chess, I would say it's roughly comparable to high-level math or good philosophy.