I’ll use the term biphasic cognition to refer to the theory of mind in which cognition can be represented as an abstraction phase, where input data is expressed in the ontology, followed by a reasoning phase, where the data in ontology-space is combined with beliefs and values about the environment and an action is selected. It seems to me that biphasic cognition is the implicit context in which researchers are using the word “ontology,” but I’ve never seen it explicitly defined anywhere, so I’m not certain.
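To make that picture concrete, here’s a minimal toy sketch of the two phases; the function names, the light-sensor example, and the decision rule are all mine, purely for illustration:

```python
# Purely illustrative sketch of "biphasic cognition", not any real system.
# Phase 1 (abstraction): raw input data is re-expressed in the agent's ontology.
# Phase 2 (reasoning): the ontology-space state is combined with beliefs and
# values about the environment, and an action is selected.

def abstract(raw_input: dict) -> dict:
    """Abstraction phase: map low-level data into ontology-space variables."""
    # e.g. a raw light-sensor reading becomes the ontology-level variable "is_dark"
    return {"is_dark": raw_input["light_sensor"] < 0.2}

def reason(state: dict, beliefs: dict, values: dict) -> str:
    """Reasoning phase: update beliefs with the abstracted state, then pick the
    action that scores best under the agent's values given those beliefs."""
    beliefs.update(state)
    candidates = {"turn_on_lamp": beliefs["is_dark"], "do_nothing": not beliefs["is_dark"]}
    return max(candidates, key=lambda a: values.get(a, 0.0) * candidates[a])

beliefs = {}
values = {"turn_on_lamp": 1.0, "do_nothing": 0.5}
action = reason(abstract({"light_sensor": 0.05}), beliefs, values)
print(action)   # -> "turn_on_lamp"
```

The point is just the factoring into two phases: everything ontology-related happens in abstract, and reason only ever sees ontology-space variables.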
This isn’t what I usually picture, but I like it as a simplified toy setup in which it’s easy to say what we even mean by “ontology” and related terms.
Two alternative simplified toy setups in which it’s relatively easy to say what we even mean by “ontology”:
For a Solomonoff inductor, or some limited-compute variant of a Solomonoff inductor, one “hypothesis” about the world is a program. We can think of the variables/functions defined within such a program as its “ontology”; see the first sketch after this list. (I got this one from some combination of Abram Demski and Steve Petersen.)
Suppose we have a Bayesian learner, with its own raw sense data as “low-level data”. Assume the Bayesian learner learns a generative model, i.e. one with a bunch of latent variables in it whose values are backed out from sense data. The ontology consists of the latent variables, and their relationships to the sensory data and each other; see the second sketch below.
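A minimal sketch of the first setup; the hypothesis-program and everything it defines are mine, purely illustrative:

```python
# Toy illustration: a "hypothesis" for a Solomonoff-style inductor is a program
# that predicts the data stream. The names defined inside that program are what
# we're calling its "ontology".

hypothesis_program = """
gravity = 9.8                      # an ontology-level variable

def position(t, v0, height):       # an ontology-level function
    return height + v0 * t - 0.5 * gravity * t**2

def predict(t):                    # the program's interface to the data stream
    return position(t, v0=0.0, height=100.0)
"""

namespace = {}
exec(hypothesis_program, namespace)                              # "run" the hypothesis
ontology = [name for name in namespace if not name.startswith("__")]
print(ontology)                    # ['gravity', 'position', 'predict']
print(namespace["predict"](1.0))   # the hypothesis' prediction at t = 1
```

The program’s predictive behavior lives in predict; the names it happens to define along the way are its ontology.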
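And a similarly minimal sketch of the second setup, with made-up numbers: one latent variable (“rained”), one piece of sense data (“grass_looks_wet”), and Bayes’ rule backing the latent out from the data.

```python
# Toy generative model: a latent variable ("rained") generates a piece of sense
# data ("grass_looks_wet"). The latent, plus its link to the sense data, is the
# learner's "ontology"; its value is backed out from raw sense data.

p_rained = 0.3                # prior over the latent variable
p_wet_given_rained = 0.9      # likelihoods linking latent -> sense data
p_wet_given_dry = 0.2

def posterior_rained(grass_looks_wet: bool) -> float:
    """Back out the latent variable from raw sense data via Bayes' rule."""
    like_rained = p_wet_given_rained if grass_looks_wet else 1 - p_wet_given_rained
    like_dry = p_wet_given_dry if grass_looks_wet else 1 - p_wet_given_dry
    joint_rained = like_rained * p_rained
    joint_dry = like_dry * (1 - p_rained)
    return joint_rained / (joint_rained + joint_dry)

print(posterior_rained(True))   # ~0.66: the latent "rained", inferred from sense data
```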
The main thing which I don’t think either of those toy models makes sufficiently obvious is that “ontology” is mostly about how an agent factors its models/cognition.
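Toy illustration of what I mean by factoring (the rectangle example is mine): the two models below make identical predictions about the same low-level quantity, but carve it into different internal variables.

```python
# Two models that predict the same low-level data (a rectangle's area), but
# factor it into different internal variables, i.e. different ontologies.

def model_a(width: float, height: float) -> float:
    # factors the rectangle into (width, height)
    return width * height

def model_b(diagonal_sq: float, aspect_ratio: float) -> float:
    # factors the *same* rectangle into (squared diagonal, aspect ratio)
    return diagonal_sq * aspect_ratio / (1 + aspect_ratio**2)

w, h = 3.0, 4.0
print(model_a(w, h))                  # 12.0
print(model_b(w**2 + h**2, w / h))    # 12.0: same prediction, different factoring
```

Same predictive content, different ontology.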