if you don’t do RL or other training schemes that seem designed to induce agentyness and you don’t do tasks that use an agentic supervision signal, then you probably don’t get agents for a long time
Is this really the case? If you imagine a perfect oracle AI, which is certainly not agentic itself, it seems to me that with some simple scaffolding one could construct a highly agentic system. It would go something along the lines of:
1. Set up API access to 'things' that can interact with the real world.
2. Ask the oracle, 'What would be the optimal action if you want to achieve <insert-goal> via <insert-api-functions>?'
3. Execute the actions it outputs.
4. Loop: feed the resulting observations from the world back into the next query and repeat.
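The steps above can be sketched as a minimal act-observe loop. Everything here is hypothetical illustration: `oracle` is a stub standing in for a question-answering model that never acts on its own, and `API` is a stubbed set of real-world actuators.

```python
def oracle(question: str) -> str:
    """Hypothetical oracle: answers questions, takes no actions itself.

    Stub logic: suggest reading the sensor until an observation exists,
    then suggest stopping.
    """
    return "read_sensor" if "observation: none" in question else "stop"


# Step 1: API access to 'things' that can interact with the world (stubbed).
API = {
    "read_sensor": lambda: "temperature=20C",
    "stop": lambda: None,
}


def run_agent(goal: str, max_steps: int = 10) -> list:
    """Wrap the non-agentic oracle in a loop, yielding an agentic system."""
    observation = "none"
    actions_taken = []
    for _ in range(max_steps):
        # Step 2: ask the oracle for the optimal action given the goal.
        question = (
            f"What would be the optimal action to achieve '{goal}' "
            f"via {list(API)}? observation: {observation}"
        )
        action = oracle(question)
        actions_taken.append(action)
        if action == "stop":
            break
        # Steps 3-4: execute the action, feed the result back in.
        observation = API[action]()
    return actions_taken
```

Note that nothing inside `oracle` is agentic; the goal-directedness lives entirely in the surrounding loop, which is the point of the argument above.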
This is my line of reasoning for why AI safety (AIS) matters for language models in general.
Could you cite the studies that this section was based on? I would be interested in reading further, as this seems to be the sticking point for most people when it comes to the topic of GM for embryos.