Per its LinkedIn, it’s a tiny 2-10 member lab. Their only previous contribution was Zochi, a system for generating experiments and papers, one of which was seemingly accepted into ACL 2025. But there’s barely any transparency on what their model actually is, even in their technical report.
I personally see red flags with Intology too, the main one being that such performance from a tiny lab is hard to believe. On RE-Bench they compare against Sonnet 4.5, which per its model card has the best performance so far, so Intology achieving superhuman results on top of it seems strange. Then there’s the fact that there’s no paper since these are early results, that the results are all self-reported with minimal verification (a single Tsinghua student checked the kernels), and that we have no technical details on the system itself or even what the underlying model is.
Another smaller lab with seemingly big contributions I can think of would be Sakana AI, but even they have far more employees, many more contributions, and actual detailed papers for their models. And even they had an incident where their CUDA Engineer system reported a 100x CUDA speedup that turned out to be the system cheating the evaluation. Here Intology claims 20x-100x speedups like candy.
I just don’t understand why the people there would lie about something like this; it isn’t even very believable. The founder looks like a bright ML PhD, and if he’s not telling the truth, why would he throw away his reputation over this? Maybe it’s real, but I’m pretty skeptical. I looked at their Zochi paper and didn’t see any proof that the papers attributed to Zochi were actually written by Zochi.
It’s happened before; see Reflection (I hope I’m remembering the name right), which hyped up its supposed real-time learner model only for it to be a lie. Tons of papers overpromise and don’t seem to face lasting consequences. But yeah, I also don’t know why Intology would be lying. Still, the fact that there’s no paper, that their deployment plans are waitlist-based and super vague (and that no one ever talks about Zochi despite their beta program being old by this point) means we likely won’t ever know. They say they plan on sharing Locus’ discoveries “in the coming months”, but until they actually do, there’s no way to verify beyond checking their kernel samples on GitHub.
For now I’m heavily, heavily skeptical. Agentic scaffolds don’t usually magically 10x a frontier model’s performance, and we know the absolute best current models are still far from human performance on RE-Bench (per their model cards, which also use proper scaffolding for the benchmark).
people lie about some crazy shit