I am less convinced by the prediction market style of verification
I’m also not super convinced, but I do think the problem of verifying solutions is a big one, so I wanted to put an alternative answer out there.
the role AI tools themselves can play in easing the mentorship bottleneck
For guiding up-and-coming researchers, I definitely agree that existing AIs can help, although I also feel that each person should find something that works for them.
For using AIs to review submissions, I’m not sure the AIs are good enough yet to do a full review, but maybe they can significantly reduce the number of low-effort papers that a human reviewer has to handle. E.g. use an LLM to check for typos, style issues, missing prior work, and whether the paper actually answers the question it poses.
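As a rough sketch of what I mean by pre-screening (the checklist and function name here are my own invention, not any real reviewing tool), the checks could be folded into a single prompt that gets sent to whatever LLM is available:

```python
# Hypothetical sketch of an LLM pre-screening pass for submissions.
# The checklist, prompt wording, and function name are illustrative only.

CHECKLIST = [
    "typos and grammatical errors",
    "compliance with the venue's style guide",
    "missing or misattributed prior work",
    "whether the paper actually answers the question it poses",
]

def build_prescreen_prompt(paper_text: str) -> str:
    """Combine the submission with a fixed checklist into one review prompt."""
    checks = "\n".join(f"- {item}" for item in CHECKLIST)
    return (
        "You are pre-screening a paper before human review. "
        "Flag only clear problems in the following areas:\n"
        f"{checks}\n\n"
        f"Paper:\n{paper_text}"
    )

# The resulting prompt would be sent to an LLM API of choice; a human
# reviewer then only sees papers that pass, or sees the flagged issues.
```

The point of keeping the checklist fixed and narrow is that the LLM acts as a filter for obvious problems, not as a substitute for the reviewer's judgment.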