its notion of regulators generally does not line up with neural networks.
When alignment researchers talk about ontologies and world models and agents, we’re (often) talking about potential future AIs that we think will be dangerous. We aren’t necessarily talking about all current neural networks.
A common-ish belief is that future powerful AIs will be more naturally thought of as agentic and as having a world model. The extent to which this will be true is heavily debated, and the gooder regulator theorem is kinda part of that debate.
Biphasic cognition might already be an incomplete theory of mind for humans
Nothing wrong with an incomplete or approximate theory, as long as you keep an eye on the things that it’s missing and whether they are relevant to whatever prediction you’re trying to make.