Ontology is a branch of philosophy. It is concerned with questions like: how can objects be grouped into categories?
An ontology is an answer to that question: a collection of sets of objects (or: a collection of sets of points in thingspace).[1] An agent’s ontology determines the abstractions it makes.
For example, consider Alice and Bob, two ordinary English-fluent humans. “Chairs”_Alice is in Alice’s ontology; it is (or points to) the set of (possible) objects she bundles together as chairs. “Chairs”_Bob is in Bob’s ontology, and it is a very similar set of objects (what he considers chairs). This overlap makes it easy for Alice to communicate with Bob and to predict how he will make sense of the world.
(Easy communication and prediction also require that their ontologies be fairly sparse, rather than full of astronomically many overlapping sets: if Alice and Bob each saw a few chairs, they would form very similar abstractions, namely “chairs”_Alice and “chairs”_Bob. The toy sketch below makes this overlap concrete.)
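To illustrate the “collection of sets” picture, here is a minimal, purely illustrative sketch in Python. The object labels and the Jaccard overlap measure are assumptions made for the example, not anything from the source:

```python
# Toy sketch (illustrative only): an agent's ontology modeled as a mapping
# from concept names to sets of (possible) objects. The specific objects and
# the overlap metric are made up for illustration.

def jaccard(a: set, b: set) -> float:
    """Overlap between two concepts' extensions (1.0 = identical sets)."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical inventories of what each agent bundles together as "chairs".
alice_ontology = {
    "chairs": {"dining_chair", "office_chair", "stool", "armchair"},
    "tables": {"dining_table", "desk"},
}
bob_ontology = {
    "chairs": {"dining_chair", "office_chair", "armchair", "beanbag"},
    "tables": {"dining_table", "desk", "workbench"},
}

# High overlap between "chairs"_Alice and "chairs"_Bob is what lets Alice
# predict how Bob will carve up a scene containing chairs.
print(jaccard(alice_ontology["chairs"], bob_ontology["chairs"]))  # 0.6
```

A high overlap score on most everyday concepts is the situation the Alice-and-Bob example describes; an agent with a very different ontology would score low on many of them.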
Why care? Most humans seem to have similar ontologies, but AI systems might have very different ones, which could cause surprising behavior: e.g. the adversarial example in which an image classifier confidently labels a subtly perturbed photo of a panda as a gibbon. Roughly, if the shared human ontology isn’t natural (i.e. learned by default) and is also hard to teach an AI, then that AI won’t think in terms of the same concepts as humans, which might be bad. See Ontology identification problem; see also misgeneralization of concepts or goals.
- ^ “Objects” means possible objects, not objects that really exist. An ontology can also include an account of other kinds of stuff, like properties, relations, events, substances, and states of affairs.