I’m curious: what’s the argument that felt most like “oh”?
Maybe something like “non-LLM AGIs are a thing too and we know from the human brain that they’re going to be much more data-efficient than LLM ones”; it feels like the focus in conversation has been so strongly on LLM-descended AGIs that I just stopped thinking about that.