I never read the paper and haven’t looked closely into the recent news and events about it. But I will admit I didn’t (and still don’t) find the general direction and magnitude of the results implausible, even if the actual paper has no value or validity and is fraudulent. For about a decade, leading materials informatics companies have reported that using machine learning for experimental design in materials and chemicals research reduces the number of experiments needed to reach a target level of performance by 50-70%. The now-presumably-fraudulent MIT paper mostly seemed to claim the same thing, but in a much broader and deeper way.
So: yes, given the recent news, we should regard this particular paper as providing essentially zero information. But if you were paying attention to prior work on AI in materials discovery, and to the case studies and marketing claims made about it, the result was also reasonably on-trend. As for the claimed effects on the people doing materials research, I have no idea; I hadn’t seen that studied before. That’s what I’m disappointed about, and I really would like to know the reality.
I think that, aside from the general implausibility of the effect sizes, and of the claimed AI tech (GANs?) delivering those effect sizes across so many areas of materials science, one of the odder claims people highlighted at the time was that the best users supposedly got a much larger productivity boost than the worst ones. That is pretty unusual: low performers usually get a lot more out of AI assistance, for obvious reasons. And that lines up with what I see anecdotally for LLMs: until very recently, possibly, they were just a lot more useful for people who are not very good at writing or other tasks than for people like me who are.