Well, narrow AI just FOOMed in its pants a little more: “Large Language Models can Self-Improve”
The researchers let PaLM parse, prompt, and filter its own outputs, yielding a ‘chain-of-thought’ that is a more reliable epistemic methodology for the AI to follow than its own once-through, single-pass guess. I stand by my claim—“AGI soon, but narrow works better”, and “prompted, non-temporal narrow AI with frozen weights will be able to do almost everything we feel comfortable letting them do.”
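The sample-then-filter loop can be sketched in a few lines. This is a minimal illustration of majority-vote (self-consistency) filtering, not the paper's actual pipeline; `sample_fn` and `toy_model` are hypothetical stand-ins for the LLM sampler:

```python
from collections import Counter
import itertools

def self_consistency_filter(sample_fn, question, n_samples=8):
    """Sample several chain-of-thought completions and keep only those
    whose final answer matches the majority vote. `sample_fn(question)`
    is a hypothetical stand-in for the LLM: each call returns one
    (reasoning, answer) pair."""
    samples = [sample_fn(question) for _ in range(n_samples)]
    majority, _ = Counter(ans for _, ans in samples).most_common(1)[0]
    kept = [(cot, ans) for cot, ans in samples if ans == majority]
    return majority, kept

# Toy stand-in model: cycles through canned answers for illustration.
_answers = itertools.cycle(["42", "42", "41", "42"])
def toy_model(question):
    a = next(_answers)
    return (f"reasoning for {question} -> {a}", a)

majority, kept = self_consistency_filter(toy_model, "6*7?", n_samples=4)
print(majority, len(kept))  # -> 42 3
```

The filtered (question, reasoning, answer) triples are what make the approach "self-improving": they become higher-quality data than any single once-through sample the model would have produced on its own.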