[Link] Sarah Constantin: “Why I am Not An AI Doomer”
Link post
This is a good post from Sarah Constantin explaining why her expected timeline to agentic AGI is long (more than 10 years). Topics discussed include whether LLMs and other current major research directions will endow AIs with adequate world models, causal inference, and goal robustness across ontological shifts.