I’ll use the term biphasic cognition to refer to a theory of mind in which cognition can be represented as an abstraction phase, where input data is expressed in the ontology, followed by a reasoning phase, where the data in ontology-space is synced with beliefs and values concerning the environment and an action is selected. It seems to me that biphasic cognition is the implicit context in which researchers are using the word “ontology,” but I’ve never seen it explicitly defined anywhere, so I’m not certain.
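To make that concrete, here’s a minimal sketch of the picture I have in mind. The names (`abstract`, `reason`) and the types are just illustrative, not a claim about how any real system is implemented:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

Percept = Any                   # raw input data: pixels, tokens, sensor readings, ...
OntologyState = Dict[str, Any]  # the same situation re-expressed in the agent's ontology
Action = Any

@dataclass
class BiphasicAgent:
    # Phase 1 (abstraction): map raw input into ontology-space.
    abstract: Callable[[Percept], OntologyState]
    # Phase 2 (reasoning): combine the ontology-space state with beliefs and
    # values concerning the environment, then select an action.
    reason: Callable[[OntologyState, Dict, Dict], Action]
    beliefs: Dict[str, Any] = field(default_factory=dict)
    values: Dict[str, Any] = field(default_factory=dict)

    def act(self, percept):
        state = self.abstract(percept)                        # abstraction phase
        return self.reason(state, self.beliefs, self.values)  # reasoning phase
```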
This isn’t what I usually picture, but I like it as a simplified toy setup in which it’s easy to say what we even mean by “ontology” and related terms.
Two alternative simplified toy setups in which it’s relatively easy to say what we even mean by “ontology”:
For a Solomonoff inductor, or some limited-compute variant of a Solomonoff inductor, one “hypothesis” about the world is a program. We can think of the variables/functions defined within such a program as its “ontology”. (I got this one from some combination of Abram Demski and Steve Petersen.)
Suppose we have a Bayesian learner, with its own raw sense data as “low-level data”. Assume the Bayesian learner learns a generative model, i.e. one with a bunch of latent variables in it whose values are backed out from sense data. The ontology consists of the latent variables, and their relationships to the sensory data and each other.
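To make the terms a bit more concrete, here are toy sketches of both setups in Python. Everything specific here (the alternation hypothesis, the rain/wet-sidewalk model, the names) is my own illustration, not anything canonical:

```python
# Setup 1: hypothesis-as-program.
# A "hypothesis" is a program that predicts the observation stream; the
# variable it defines internally (`phase`) is part of its ontology, a term
# it models the data in rather than something present in the raw bits.

def alternating_hypothesis(history):
    """Predict the next bit under the hypothesis 'the stream goes 0,1,0,1,...'."""
    phase = len(history) % 2   # internal / ontological variable
    return phase

# Setup 2: latent variable in a generative model.
# One latent variable ("raining") generates one observable ("sidewalk wet").
# The latent, plus its link to the sense data, is the ontology; inference
# "backs out" its value from the observation.

P_RAIN = 0.3                             # prior over the latent variable
P_WET_GIVEN = {True: 0.9, False: 0.1}    # P(sidewalk wet | raining?)

def posterior_rain(observed_wet):
    """P(raining | observation), by enumerating over the latent variable."""
    def likelihood(rain):
        p = P_WET_GIVEN[rain]
        return p if observed_wet else 1.0 - p
    joint = {rain: (P_RAIN if rain else 1.0 - P_RAIN) * likelihood(rain)
             for rain in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(alternating_hypothesis([0, 1, 0]))  # -> 1
print(posterior_rain(True))               # -> ~0.79
```

The point in both cases is the same: `phase` and `raining` are not in the raw data; they’re variables the model introduces, and those variables (and their relationships) are the ontology.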
The main thing that I don’t think either of those toy models makes sufficiently obvious is that “ontology” is mostly about how an agent factors its models/cognition.
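One way to gesture at that: the same low-level data can be factored into different sets of variables, and which factorization the agent actually uses in its cognition is a big part of its ontology. A toy illustration (entirely my own, just to show the idea):

```python
import math

# The same point in the plane, factored two different ways. Which variables
# the agent carries around, (x, y) or (r, theta), is an ontological choice,
# even though both factorizations carry the same information.

def cartesian_factoring(x, y):
    return {"x": x, "y": y}

def polar_factoring(x, y):
    return {"r": math.hypot(x, y), "theta": math.atan2(y, x)}

print(cartesian_factoring(3.0, 4.0))  # {'x': 3.0, 'y': 4.0}
print(polar_factoring(3.0, 4.0))      # {'r': 5.0, 'theta': 0.927...}
```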
The gooder regulator theorem’s notion of regulators generally does not line up with neural networks.
When alignment researchers talk about ontologies and world models and agents, we’re (often) talking about potential future AIs that we think will be dangerous. We aren’t necessarily talking about all current neural networks.
A common-ish belief is that future powerful AIs will be more naturally thought of as agentic and as having a world model. The extent to which this will be true is heavily debated, and the gooder regulator theorem is kinda part of that debate.
Biphasic cognition might already be an incomplete theory of mind for humans
Nothing wrong with an incomplete or approximate theory, as long as you keep an eye on the things that it’s missing and whether they are relevant to whatever prediction you’re trying to make.
I see most work of the kind you describe about ontology as extra abstractions for reasoning about ontologies, layered on top of the basic thing that ontologies are.
So what is ontology fundamentally? Simply the categorization of the world, telling apart one thing from another. Something as simple as a sensor that flips the voltage on an output wire high or low based on whether there’s more than X lumens of light hitting the sensor is creating an ontology by establishing a relationship between the voltage on the output wire and the environment surrounding the sensor.
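For concreteness, here is that sensor written out as code (the threshold value and the names are arbitrary):

```python
LUMEN_THRESHOLD = 500.0  # the "X lumens" cutoff; arbitrary for illustration

def sensor_output(lumens):
    """Drive the output wire high (1) or low (0) based on incident light.

    This single comparison already imposes an ontology: the continuous light
    environment gets carved into exactly two categories, bright and dark.
    """
    return 1 if lumens > LUMEN_THRESHOLD else 0

print(sensor_output(800.0))  # 1, "bright"
print(sensor_output(120.0))  # 0, "dark"
```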
Given that an ontology can be a pretty simple thing, I don’t know that folks are confused about ontology so much as sometimes confused about how complex an ontology they can claim a system has.