I never read the paper and haven’t looked closely into the recent news and events about it. But I will admit I didn’t (and still don’t) find the general direction and magnitude of the results implausible, even if the actual paper is fraudulent and has no validity. For about a decade, leading materials informatics companies have reported that using machine learning for experimental design in materials and chemicals research reduces the number of experiments needed to reach a target level of performance by 50-70%. The now-presumably-fraudulent MIT paper seemed to claim much the same thing, but in a way that was much broader and deeper.
So: yes, given recent news, we should regard this particular paper as providing essentially zero information. But if you were paying attention to prior work on AI in materials discovery, and to the case studies and marketing claims made about it, the result was also reasonably on-trend. As for the claimed effects on the people doing materials research, I have no idea; I hadn’t seen that studied before. That’s what I’m disappointed about, and I really would like to know the reality.
Not sure what he has done on AI since, but Tim Urban’s 2015 AI blog post series mentions that he was new to AI and AI risk and spent a little under a month studying and writing those posts. I re-read them a few months ago and immediately recommended them to several people with no prior AI knowledge, because they have held up remarkably well.