"does not have any integrated world-model that it can loop on to do novel long-term planning"
I am interested in more of your thoughts on this part, because I do not grok the necessity of a single, integrated world-model or of long-term planning (though I’m comfortable granting that each would make a system much more effective). Are these independent requirements, or are they linked somehow? Would an explanation look something like:
Because the chunks of the world model are small, foom won’t meaningfully increase capabilities past a certain point.
Or maybe:
Without long-term planning, the disproportionate investment in increasing capabilities that leads to foom would never make sense for the system to undertake.
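To make sure we're talking about the same thing, here is the cartoon I have in my head of what "looping on a world-model to do long-term planning" means. Everything below (`WorldModel`, `plan`, the toy reward) is my own hypothetical illustration, not something from your post: the system repeatedly rolls its model forward over candidate action sequences, scores the predicted futures, and commits to the first step of the best one.

```python
import itertools

# Toy stand-in for "an integrated world-model you can loop on":
# given a state and an action, it predicts the next state and a reward.
class WorldModel:
    GOAL = 10

    def predict(self, state, action):
        next_state = state + action
        reward = -abs(self.GOAL - next_state)  # closer to the goal is better
        return next_state, reward

def plan(model, state, actions=(-1, 0, 1), horizon=5):
    """Roll the model forward over every action sequence and keep the best first move."""
    best_action, best_return = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s, r = model.predict(s, a)
            total += r
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

# The loop itself: plan, take one action, observe, re-plan.
model, state = WorldModel(), 0
for _ in range(15):
    action = plan(model, state)
    state, _ = model.predict(state, action)  # stand-in for acting in the real world
print(state)  # settles near the goal state of 10
```

If that cartoon is roughly right, then my question is whether the missing piece is the model that `predict` stands in for, the willingness or ability to run the outer loop over long horizons, or whether the two only matter in combination.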
I suppose the obvious follow-up question is: do you think there are any interesting ideas being pursued currently? Even nascent ones?
Two that I (a layperson) find interesting are the interpretability/transparency and neural ODE angles, though both of these are less about capability than about understanding what makes capability work at all.