Those would be good suggestions if we had a means of slowing the current paradigm and making/keeping it non-agentic.
Do you know of any ideas for how we could convince enough people to do those things? I can see a shift in public opinion in the US, and even a movement for “don’t make AI that can replace people,” which would technically translate to no generally intelligent learning agents.
But I can’t see the whole world abiding by such an agreement, because general tool AI like LLMs is just too easily converted into an agent as it keeps getting better.
Developing new tech in time to matter, absent a slowdown, seems doomed to me.
I would love to be convinced that this is an option! But at this point it looks 80%-plus likely that LLMs plus scaffolding (or related breakthroughs) get us to AGI within five years, or a little more if global events work against it. That makes starting from scratch nigh impossible, and even substantially different approaches very unlikely to catch up.
The exception is the de-slopifying tools you’ve discussed elsewhere. That approach has the potential to make progress on the current path while also reducing the risk of slop-induced doom. It doesn’t solve actual misalignment of the sort depicted in AI-2027, but it would help other alignment techniques work more predictably and reliably.