Deconfusing “ontology” in AI alignment

Epistemic status: quite uncertain; the idea that ontologies are inadequate feels like a hundred-dollar bill lying on the floor of Grand Central Terminal. I’m also fairly sure that I’m constructing a strawman, but it’s probably good for someone to be pedantically skeptical about ontologies.
In the parlance of AI alignment, “ontology” is a term that formalizes the set of objects, categories, actions, etc., over which minds reason. The idea presupposes a theory of mind (indirect realism) in which, rather than directly interfacing with the environment around us, the mind takes in sensory data and constructs an internal ontology. The actions and predictions of a cognitive system are then functions of the latent objects in the ontology.
Ontologies are a useful concept in agent modeling because an agent’s utility function and beliefs are easily definable over its mental objects. Eliciting an agent’s ontology is also an ambitious goal of interpretability, since it would provide a clean way to represent a neural network as a decision-theoretic agent. The field of AI alignment deals in particular with the problems of ontology identification and ontology mismatch: ontology identification is the problem of eliciting the ontology from a neural network (or other cognitive system), while ontology mismatch is the problem of translating human concepts and values into the ontology of the neural network.
This post seeks to challenge the usefulness of ontologies in framing how artificial minds operate, specifically within the AI alignment literature. A lot of researchers seem to be working on ontology identification, but I haven’t seen any posts that make a compelling case for why we should expect ontologies to emerge in the first place. If you’re unfamiliar with ontologies, it may behoove you to skim a couple of posts under the Ontology tag on Lesswrong before continuing.
Defining ontologies
In this section, I present a (hopefully impartial) natural-language description of ontologies based on a couple of definitions across Lesswrong. It’s difficult to create an adequate formalism for ontologies, in large part because they are intuited through our own conscious experience rather than based on any empirical data. The following is the definition I explicate and then build a mathematical formalism on top of:
An ontology is the system containing the functions, relations, and objects used in the mind’s internal formal language.
This definition also hints at the idea that each ontology has a corresponding reasoner that uses the formal language. The reasoner is where the beliefs and values of the system live, and it generates actions based on perceptual input. I’ll use the term biphasic cognition to refer to the theory of mind in which cognition can be represented as an abstraction phase, where input data is expressed in the ontology, followed by a reasoning phase, where the data in ontology-space is synced with beliefs and values concerning the environment and then an action is selected. It seems to me like biphasic cognition is the implicit context in which researchers are using the word “ontology,” but I’ve never seen it explicitly defined anywhere so I’m not certain.
With a more formal framing, we can define the ontology of a cognitive system F as a function O: input space → latent space such that there exists a reasoning function R: latent space → output space where R satisfies the “reasoner conditions” and F(x) = R(O(x)). There are two interpretations of this, a looser one and a stricter one:
1. (Looser) Any cognitive system F can merely be represented as R(O(x)), meaning the computational path of F does not necessarily look like the composition of O and R.
2. (Stricter) Any computable cognitive system F can be broken down as (f1∘f2∘…∘fn)(x), where each fi is a basic arithmetic operation. Let O be an ontology of F iff O(x)=(f1∘…∘fk)(x) for some k, and R(z)=(fk+1∘…∘fn)(z).
This distinction is important because Definition 2 would imply that the ontology is already present in the program, and we just need to find the cutoff k. Definition 1 encapsulates a greater number of systems than Definition 2. Definition 2 especially matches how I intuitively picture biphasic cognition: the system literally computes its abstraction first and then reasons over it.
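To make Definition 2 concrete, here is a minimal sketch in Python, with a toy stack of dense layers standing in for the fi (everything here is my own illustrative construction, not something from the posts I cite): a candidate ontology is just the prefix of the computation up to some cutoff k, and the candidate reasoner is the suffix.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "cognitive system" F: a fixed stack of simple functions f1, ..., fn.
# Dense layers stand in for the basic operations of Definition 2.
weights = [rng.normal(size=(8, 8)) for _ in range(4)]
layers = [lambda x, W=W: np.tanh(W @ x) for W in weights]

def F(x):
    """The whole system: (fn ∘ ... ∘ f1)(x)."""
    for f in layers:
        x = f(x)
    return x

def decompose(k):
    """Split F at cutoff k into a candidate ontology O (the first k steps)
    and a candidate reasoner R (the remaining steps)."""
    def O(x):
        for f in layers[:k]:
            x = f(x)
        return x  # a point in "latent space"

    def R(z):
        for f in layers[k:]:
            z = f(z)
        return z  # the system's output

    return O, R

x = rng.normal(size=8)
O, R = decompose(k=2)
assert np.allclose(F(x), R(O(x)))  # F(x) = R(O(x)) at this cutoff
```

Of course, nothing in this sketch checks whether R actually satisfies the reasoner conditions; that check is what would separate a genuine ontology-reasoner decomposition from an arbitrary cut.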
For a function to be a reasoner, it must be isomorphic to some process that looks like symbolic reasoning, such as a Bayesian network or a Markov logic network. By isomorphic I mean that there exists a bijective mapping between the parameters of the reasoner and the parameters of a corresponding symbolic reasoning function with identical output. I don’t think there are any formal demands to place on ontologies, since an ontology just transmutes the input information. Surely some ontologies are better than others, in that they more efficiently encode information or keep more relevant information, but those are demands on quality, not ontology-ness.
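As a gesture at what the reasoner conditions would demand, here is a minimal sketch (my own toy construction, not a proposal from any of the literature) of a function whose parameters literally are the tables of a tiny Bayes net plus a utility table, so the bijection to a symbolic reasoner is trivial:

```python
import numpy as np

# The "ontology" reports a single binary latent z (say, "the image contains
# a dog"). The reasoner's parameters literally are the tables of a tiny
# Bayes net over world state W and report Z, plus a utility table.
p_w = np.array([0.3, 0.7])              # prior P(W)
p_z_given_w = np.array([[0.9, 0.1],     # P(Z | W), rows indexed by W
                        [0.2, 0.8]])
utility = np.array([[ 1.0, -1.0],       # U(action, W)
                    [-0.5,  0.5]])

def reasoner(z: int) -> int:
    """Bayes-net inference over W followed by expected-utility action choice."""
    joint = p_w * p_z_given_w[:, z]      # P(W, Z=z)
    posterior = joint / joint.sum()      # P(W | Z=z)
    expected_utility = utility @ posterior
    return int(np.argmax(expected_utility))

print(reasoner(0), reasoner(1))  # -> 0 1
```

The point of the isomorphism requirement is that a learned R counts as a reasoner only if its parameters can be mapped bijectively onto something shaped like this while preserving outputs, which is a much stronger condition than merely computing the same input-output function.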
Both Definition 1 and Definition 2 square pretty well with how we think about biphasic cognition in humans, since the idea of ontologies is based on our conscious thought in the first place. Definition 2 is a bit harder to match up, since we don’t have the source code for our brains, but from our own conscious experience we infer that the intermediate step between O and R actually seems to occur (as in, not only can humans be represented as R(O(x)), but it also seems like this is the actual “computational” path our brains follow).
Reasons why biphasic cognition may be incorrect
Assuming that my framing of biphasic cognition and ontologies represents how others view the subject, here are some reasons why it might be incorrect or incomplete, specifically when applied to modern neural networks.
Biphasic cognition might already be an incomplete theory of mind for humans
Biphasic cognition is just a model of cognition, and not one that provides much predictive power. The motivation is based on how we think our minds operate from the inside. Neuroscientists don’t seem to really know where or how conscious thought occurs in the first place, and some philosophers think that consciousness is entirely an illusion. This is mostly to say that the existence of doubts around the realness of conscious/symbolic thought in humans should entail even larger a priori doubts about the emergence of conscious/symbolic thought in artificial minds.
This is especially important because some approaches to alignment assume that our own ontology and reasoning can be elicited and formally represented. In the ELK report, for example, much of the approach to eliciting knowledge involves translating thoughts from the Bayes net of an AI system to the Bayes net of a human. If the framing of biphasic cognition which we’ve pieced together from our conscious experience does not actually reflect what is happening mechanistically, then we’re in a pickle.
Biphasic cognition lacks empirical evidence for generalization
Even if biphasic cognition is a good model for human cognition, there is little empirical evidence thus far to suggest that the framing translates cleanly to other minds. And most critically, the theory itself lacks predictive power, making it susceptible to methodological traps: one is reminded of how the theory of phlogiston was dominant among chemists in the 18th century. Phlogiston arose from natural philosophers trying to unite the Aristotelian element of fire with the budding principles of chemistry. Because the theory was still being developed, observations were explained by augmenting the properties of phlogiston. These claims were not challenged because it was assumed phlogiston existed and just needed its properties to be formalized. Chemists didn’t realize that they were viewing combustion through an inadequate paradigm.
Within reinforcement learning, some architectures are built with world-modeling in mind, either via directly constructing a POMDP or by separately training a world model in an unsupervised/self-supervised fashion and then training an agent to interact with the representation created by the world model (here and here). The existence and success of these architectures do not contradict my main point: that clear-cut biphasic cognition probably won’t emerge naturally. Furthermore, these architectures are not robust against the many-ontologies framing I detail below.
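For concreteness, here is a minimal sketch (in PyTorch; the module names and sizes are mine, and this is a deliberately stripped-down caricature rather than any specific published architecture) of the two-stage setup described above, where a world model is trained on observations alone and a policy then only ever sees the frozen model’s latents:

```python
import torch
import torch.nn as nn

obs_dim, latent_dim, n_actions = 32, 8, 4

# Stage 1: train a world model self-supervised (a plain autoencoder here).
encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
decoder = nn.Sequential(nn.Linear(latent_dim, obs_dim))
wm_opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):
    obs = torch.randn(64, obs_dim)        # stand-in for logged observations
    loss = nn.functional.mse_loss(decoder(encoder(obs)), obs)
    wm_opt.zero_grad()
    loss.backward()
    wm_opt.step()

# Stage 2: the agent only ever sees the frozen world model's latents.
for p in encoder.parameters():
    p.requires_grad_(False)               # freeze the imposed "ontology"

policy = nn.Linear(latent_dim, n_actions)
obs = torch.randn(64, obs_dim)
with torch.no_grad():
    z = encoder(obs)
action_logits = policy(z)                 # actions are a function of latents only
# (The policy would then be trained with whatever RL objective you like.)
```

Even here, nothing forces the latent z to behave like an ontology in the sense defined above; the decomposition is imposed by the architecture, which is quite different from it emerging inside one big end-to-end network.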
Speculation
No ontologies, just floating abstractions
Depending on how stringent our requirements for a function to be a “reasoner” are, we may end up unable to represent neural networks with biphasic cognition. In this case, abstractions exist as subfilters of information spaced along the entire filter that is the neural network rather than localizing as a single ontology.
Below is a diagram of increasingly detailed feature maps in a CNN (note: this widely-circulated image probably wasn’t sourced from a real CNN). While it’s tempting to think that feature assembly stops after the high-level features are constructed, and that any subsequent computation is more similar to logical inference, I think it’s more likely that the “reasoning” occurs via the same mode of thought as the feature assembly, given that the subsequent computation is still happening within convolutional layers. By this I mean that we picture our conscious selves as interfacing with the data at this point, but with neural networks the data seems to just keep getting filtered and abstracted through weird functions of latent variables.
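A minimal sketch of what “abstractions spaced along the entire filter” looks like in practice (using torchvision’s ResNet-18 purely as a convenient stack of convolutional layers; the choice of model and layers is arbitrary): every stage yields some intermediate representation, and nothing internal to the network marks one of them as “the” ontology.

```python
import torch
import torchvision

# Randomly initialized ResNet-18, used only as a convenient stack of conv layers.
model = torchvision.models.resnet18(weights=None).eval()

activations = {}
def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(save(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))    # a stand-in image

for name, act in activations.items():
    print(name, tuple(act.shape))
# Every stage is just another filtered re-description of the input; nothing in
# the network marks a point where "feature assembly" hands off to "reasoning".
```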
Many ontologies
On the other hand, we might end up with very strong representation techniques, to the point where we’re able to decompose neural networks into ontologies and reasoners. A potential issue in this case is that there may exist multiple ontology-reasoner decompositions of a neural network. Under Definition 1 of biphasic cognition above, this would look like multiple (O,R) pairs whose ontologies produce outputs that are not isomorphic to each other; under Definition 2, it would look like multiple points k in the computational path at which the ontology and reasoner can be delineated.
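To illustrate the Definition 2 version of this worry, here is the same kind of toy setup as the earlier decomposition sketch (again purely illustrative): every cutoff k yields a valid (O, R) pair, and nothing internal to the network privileges one of them.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)) for _ in range(4)]
layers = [lambda x, W=W: np.tanh(W @ x) for W in weights]

def run(fs, x):
    for f in fs:
        x = f(x)
    return x

x = rng.normal(size=8)
full_output = run(layers, x)

# Every cutoff k gives a valid (O, R) pair with F(x) = R(O(x)), and the
# latents at different cutoffs need not resemble one another at all.
for k in range(1, len(layers)):
    O = lambda v, k=k: run(layers[:k], v)
    R = lambda z, k=k: run(layers[k:], z)
    assert np.allclose(full_output, R(O(x)))
    print(f"k={k}: latent = {np.round(O(x), 2)}")
```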
I’ll note that this possibility doesn’t dismiss the existence of ontologies, but it does run contrary to the typical framing of ontologies, and it wasn’t addressed in any of the papers and Lesswrong posts I read while researching this post. My thoughts on multiple ontologies are outside the scope of this post, but I hope to investigate them later. One concern I do have, though, is that if the math ends up too strong, we could represent O and R for any k, which would call into question the usefulness of the ontology framing; at that point, it almost reverts to the floating-abstraction picture.
This issue might also just be a problem with the formalism I’ve laid out, but this is all I have to say on the topic right now.
A note on the gooder regulator theorem
One result that I came across while searching for representation theorems concerning ontologies was the gooder regulator theorem, which shows that an optimal “regulator” (synonymous with “agent” or “cognitive system” in this context) will, given some conditions, construct a world model from its training data that is isomorphic to the Bayesian posterior over world states given the input data. A stronger version of this statement would invalidate most of the points I make in this post, but I think the theorem is actually wholly inapplicable to real AI systems.
In John Wentworth’s original post, he provides the following setup: imagine a regulator, consisting of a model function M and an output function R, that acts within a system S to optimize some target Z. The regulator is first given training data X, containing only the variables it can observe from the system. It can only store information about X in the model function M. Then the regulator is provided with “test” data Y, along with an optimization problem (“game”) to solve for each item in Y.
The idea is that if M retains as little information about X as possible while still allowing the regulator to perform optimally on the tasks and data in Y, then the output of M is isomorphic to the Bayesian posterior distribution over the state of S given the input. Thus, the optimal regulator literally reconstructs an optimal model of the system state given a noisy (or noiseless) input. John’s proof can be found here.
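In symbols, a rough paraphrase of the claim as I’ve just described it (the notation is my own; John’s post has the precise conditions and proof):

```latex
% M retains as little information about X as possible, subject to the regulator
% still playing every game in Y optimally; under those conditions M's output is,
% up to isomorphism, the Bayesian posterior over the system state S.
\[
\begin{aligned}
  M^{*} &\in \arg\min_{M} \; I\big(M(X);\, X\big)
    \quad \text{s.t. the regulator plays every game } y \in Y \text{ optimally},\\
  M^{*}(x) &\,\cong\, P\big(S \mid X = x\big)
    \quad \text{(the Bayesian posterior over the system state).}
\end{aligned}
\]
```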
While the math for the theorem is sound, its notion of regulators generally does not line up with neural networks. First, the learned parameters are limited to M, meaning that R can’t retain any information about X. This means that the parts of a neural network optimized via backpropagation (i.e. all of it, usually) are under the umbrella of M, so this theorem doesn’t show anything about optimal world models arising inside of neural networks, only that the output will be optimal if used as the encoder in some larger system. Additionally, the regulator is forced to optimize over an arbitrarily large set of games, which means that it can’t afford to discard any information in X, so the model must be a lossless compression of X. Real neural networks are not trained across such games, however, so the system’s posterior distribution given X will not be selected for, even in models which are optimal over their test data.
Recap/TL;DR
In this post, I present a natural-language definition of ontologies and use it to construct two mathematical formalisms that give a specific picture of what an ontology in a neural network could look like. I then argue that the motivation for modeling neural networks as using ontologies in the first place is flawed, and briefly sketch two alternative views of how neural networks might abstract their environment. I finish with a brief note on the gooder regulator theorem, explaining why it isn’t particularly useful here.