I don’t think it’s time yet to rule out the potential AGI-worthiness of scaffolded LLMs of the kind Imbue develops (they’ve also recently secured 10K H100s). Yes, there are glaring missing pieces in their cognition, but if LLMs keep getting smarter with scale, they might compensate for some cognitive limitations with other cognitive advantages well enough to get the ball rolling.
Then there’s Mistral, which recently published weights for a model that is on par with GPT-3.5, and not just on benchmarks, and has probably caught up with Claude 2 with another model (Mistral Medium). Mistral’s Arthur Mensch seems either to disbelieve in existential risk or to consider the future to be outside his ontology.