[Question] Impact of “‘Let’s think step by step’ is all you need”?

Title from: https://twitter.com/arankomatsuzaki/status/1529278581884432385.

This is not quite a linkpost for that paper, “Large Language Models are Zero-Shot Reasoners” (Kojima et al., 2022, https://arxiv.org/abs/2205.11916).

Nonetheless, the abstract is:

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs’ ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding “Let’s think step by step” before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with an off-the-shelf 175B parameter model. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted through simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.

The paper illustrates the method with a figure contrasting standard zero-shot prompting with the two-stage Zero-shot-CoT prompt format (the image is not reproduced here).

According to the paper linked above, this technique of prompting GPT-3 with “Let’s think step by step” and then, after it thinks out loud, with “Therefore, the final answer is”, increases GPT-3’s performance very significantly. This seems to indicate that the algorithms we currently use benefit from using the input/output space as a cognitive workspace. How much progress is likely to be made by giving large neural networks a cognitive workspace that is more natural? Are there large language models today that have internal state, and the ability to reason over it for as long as they deem necessary (or at least to vary the amount of resources devoted to a given query), in order to come up with a better response?
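
To make the two-stage setup concrete, here is a minimal sketch of the pipeline as the paper describes it, written against the legacy OpenAI completions API as it existed around 2022. The model name, token budgets, and the exact answer-extraction phrase below are illustrative stand-ins (the paper varies the extraction phrase per task, e.g. asking for arabic numerals on arithmetic benchmarks), not the authors’ exact configuration.

```python
# Sketch of Zero-shot-CoT's two-stage prompting, assuming the legacy
# openai Python library (openai.Completion). Requires OPENAI_API_KEY
# to be set in the environment.
import openai


def complete(prompt: str, max_tokens: int) -> str:
    """One greedy completion call; temperature=0 matches the paper's main results."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed stand-in for the 175B model
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0,
    )
    return response["choices"][0]["text"]


def zero_shot_cot(question: str) -> str:
    # Stage 1: elicit the reasoning trace by appending the magic phrase.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt, max_tokens=256)

    # Stage 2: feed the model's own reasoning back in, then extract the answer.
    answer_prompt = (
        reasoning_prompt + reasoning + "\nTherefore, the final answer is"
    )
    return complete(answer_prompt, max_tokens=32).strip()


print(zero_shot_cot(
    "A juggler has 16 balls. Half are golf balls, and half of the "
    "golf balls are blue. How many blue golf balls are there?"
))
```

The design point worth noticing is that the second call literally re-feeds the model its own generated reasoning as part of the prompt: the “workspace” is nothing more than the text stream itself, which is what makes the question above about more natural internal workspaces bite.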