current oversights of the ai safety community, as I see it:
LLMs vs. Agents. the focus on LLMs rather than agents (agents are more dangerous)
Autonomy Preventable. the belief that we can prevent agents from becoming autonomous (capitalism selects for autonomous agents)
Autonomy Difficult. the belief that only big AI labs can make autonomous agents (millions of developers can)
Control. the belief that we’ll be able to control autonomous agents or set their goals (they’ll develop self-interest no matter what we do)
Superintelligence. the focus on agents that are not significantly smarter or more capable than humans (superintelligence is more dangerous)