I’d like to comment on your discussion of peer review.
‘Tyler Cowen’s presentation of the criticism then compounds this, entitled ‘Modeling errors in AI doom circles’ (which is pejorative on multiple levels), calling the critique ‘excellent’ (the critique in its title calls the original ‘bad’), then presenting this as an argument for why this proves they should have… submitted AI 2027 to a journal? Huh?’
To me, this response in particular suggests you might misunderstand the point of submitting to journals and receiving peer review. The reason Tyler says they should have submitted it is not that the original model and publication being critiqued are good and especially worthy of publication; it is that the work would have received this kind of careful review and feedback before publication, solicited by an editor independent of the authors and provided anonymously. The authors would then be able to improve their models accordingly, and the reviewers and editor would decide whether the changes were sufficient or request further revisions.
It is a lot of effort to engage with and critique this type of work, and it is unlikely titotal’s review will be read as widely as the original piece, or the updated piece once these criticisms are taken into account. And I also found the responses to his critique slightly unsatisfying—only some of his points were taken on board by the authors, and I didn’t see clear arguments why others were ignored.
Furthermore, it is not reasonable to expect most of the audience consuming AI 2027 and similar work to have the necessary expertise and time to go through the methodology as carefully as titotal has done. Those readers are also particularly unlikely to read the critique and use it to shape their takeaways from the original article. However, they are likely to see that there are pages and pages of supplementary information and analysis that looks pretty serious and, based on that, assume the authors know what they are talking about.
You are right that AI research moves fast and tends not to bother waiting for the peer review process to finish, which can for sure be frustratingly time-consuming. However, realistically, a lot of ML research articles that are widely shared and hyped without going through peer review are really bad, don't replicate, and don't even attempt to check the robustness of their findings. The incentive structure changes: researchers overstate their findings in abstracts so that articles get picked up on social media, rather than expressing things more cautiously lest their statements be picked apart by anonymous reviewers. Progress still gets made, and very quickly, and the rapid sharing of preprints is definitely really helpful for disseminating ideas early and widely, but this aspect of the field does come with costs and we can't ignore that.
Finally, going through peer review doesn’t prevent people from performing additional critique and review, like titotal has done, once an article has been published. It is not either-or. In many journals, peer review reports and responses are also published once the article is accepted, so this is also public.
Peer review is by no means a perfect system and I myself think it should be significantly reworked. However, I think the strengths and weaknesses of the existing structures are often not very well understood by the members of this community who argue for it to be gotten rid of wholesale.