Yes. See Google’s Co-Scientist project for an example of an AI scaffolded to have a rumination mode. It is claimed to have matched the theory creation of top labs in two areas of science.
This rumination mode is probably expensive, and so far it's only claimed to be effective in the domains it was engineered for. But based on the scaffolded, roughly evolutionary algorithm they used to recombine and test hypotheses against published empirical results, I'd expect a general version to work almost as well across domains, once somebody puts effort and some inference money into making it work.
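For concreteness, here's a minimal sketch of what a generate-rank-recombine loop like that could look like. The function names, scoring scheme, and loop structure are my own assumptions for illustration, not the actual co-scientist implementation (which layers multiple specialized agents on top of this basic idea):

```python
import random

# Hypothetical sketch of an evolutionary hypothesis-refinement loop.
# The caller supplies the LLM-backed operations; nothing here is taken
# from the real co-scientist system.

def evolve_hypotheses(goal, generate, score, recombine,
                      generations=5, population_size=8):
    """Evolve candidate hypotheses for `goal`.

    generate(goal) -> str        propose a new hypothesis
    score(hypothesis) -> float   grade consistency with published results
    recombine(h1, h2) -> str     merge two hypotheses into a new one
    """
    population = [generate(goal) for _ in range(population_size)]
    for _ in range(generations):
        # Keep the best-scoring half of the population...
        ranked = sorted(population, key=score, reverse=True)
        survivors = ranked[: population_size // 2]
        # ...and refill it by recombining random pairs of survivors.
        children = [recombine(*random.sample(survivors, 2))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children
    return max(population, key=score)
```

The point of the structure is just that hypothesis quality can keep improving with more inference spend, since each generation is another round of critique against the literature.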
This is cool and valuable, as you say. It's also extremely dangerous, since the lack of this kind of rumination is one of the few remaining gaps between current LLMs and the general reasoning abilities of humans, and a system that closes it would have those abilities without human ethics or human limitations.
Caveat: I haven't closely checked the credibility of the co-scientist breakthrough story. Based on the source, I think it's unlikely to be entirely fake or badly overstated, but draw your own conclusions.
So far I've primarily drawn my conclusions from this podcast interview with the creators and from a deep research report based largely on this paper on the co-scientist project.
It looks like Nathan Labenz, the host of that podcast (and an AI expert in his own right), estimates the inference cost of one cutting-edge, literature-based hypothesis at $100-1000 in this followup episode (which I don't otherwise recommend, since it's focused on the actual biological science).