Over the years I’ve picked up on more and more phrases that people on LessWrong use. However, “ontology” is one of them that I can’t seem to figure out. It seems super abstract and doesn’t seem to have a reference post.
So then, please ELI5: what is ontology?
I’ll give you an example of an ontology in a different field (linguistics) and maybe it will help.
This is WordNet, an ontology of the English language. If you type “book” and keep clicking “S:” and then “direct hypernym”, you will learn that book’s place in the hierarchy is as follows:
… > object > whole/unit > artifact > creation > product > work > publication > book
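If you have Python and NLTK handy (that's an assumption about your setup, and you'd also need the WordNet corpus downloaded), you can walk that same chain programmatically. A rough sketch:

```python
# Walk the hypernym ("is a kind of") chain for the first noun sense of "book".
# Assumes NLTK is installed and the WordNet corpus has been fetched with
# nltk.download("wordnet").
from nltk.corpus import wordnet as wn

synset = wn.synsets("book")[0]               # book.n.01
chain = [synset]
while chain[-1].hypernyms():
    chain.append(chain[-1].hypernyms()[0])   # follow the first direct hypernym

# Print from the most general concept down to "book". Should print something like:
# entity > physical_entity > object > whole > artifact > creation > product > work > publication > book
print(" > ".join(s.name().split(".")[0] for s in reversed(chain)))
```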
So if I had to understand one of the LessWrong (-adjacent?) posts mentioning an “ontology”, I would forget about philosophy and just think of a giant tree of words. Because I like concrete examples.
Now let’s go and look at one of those posts.
https://arbital.com/p/ontology_identification/#h-5c-2.1 , “Ontology identification problem”:
My “tree of words” understanding: we classify things into “human minds” or “not human minds”, but now that we know more about possible minds, we don’t want to use this classification anymore. Boom, we have more concepts now and the borders don’t even match. We have a different ontology.
From the same post:
My understanding: You learned more about carbon and now you have new concepts in your ontology: carbon-12 and carbon-14. You want to know if a “diamond” should be “any carbon” or should be refined to “only carbon-12”.
Let’s take a few more posts:
My understanding: You thought only [particular things] were bets so you said “I won’t take bets”. I convinced you that all decisions are bets. This is a change in ontology. Maybe you want to reevaluate your statement about bets now.
My understanding: AI and humans have different sets of categories. AI can’t understand what you want it to do if your categories are different. Like, maybe you have “creative work” in your ontology, and this subcategory belongs to the category of “creations by human-like minds”. You tell the AI that you want to maximize the number of creative works and it starts planting trees. “Tree is not a creative work” is not an objective fact about a tree; it’s a property of your ontology; sorry. (Trees are pretty cool.)
Also, to answer your question about “probability” in a sister chain: yes, “probability” can be in someone’s ontology. Things don’t have to “exist” to be in an ontology.
Here’s another real-world example:
You are playing a game. Maybe you’ll get a heart, maybe you won’t. The concept of probability exists for you.
This person — https://youtu.be/ilGri-rJ-HE?t=364 — is creating a tool-assisted speedrun for the same game. On frame 4582 they’ll get a heart, on frame 4581 they won’t, so they purposefully waste a frame to get a heart (for instance). “Probability” is not a thing that exists for them — for them the universe of the game is fully deterministic.
That person’s ontology is “right” and yours is “wrong”. On the other hand, your ontology is useful to you when playing the game, and theirs wouldn’t be. You don’t even need to have different knowledge about the game: you both know the game is deterministic, and it still changes nothing.
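If it helps to see it in code, here’s a toy sketch of the two ontologies looking at the same game. The “RNG” function, the multiplier, the 25% figure, and the frame numbers are all invented for illustration, not the actual game’s mechanics:

```python
# Toy model: the game's "randomness" is really a deterministic function of
# the frame counter. Everything here is made up for illustration.
def heart_drops(frame: int) -> bool:
    return (frame * 2654435761) % 100 < 25

# Your ontology while playing casually: "there's some chance of a heart".
# You never look at frame numbers, so probability is a useful concept for you.

# The TASer's ontology: no probability, only frames.
print(heart_drops(4581))  # False: so they waste a frame...
print(heart_drops(4582))  # True: ...and collect the heart on this frame instead.
```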
Actually, let’s do a 2x2 matrix for all combinations of, let’s say, “probability” and “luck” in one’s personal ontology:
Person C: probability and luck both exist. Probability is partly influenced/swayed by luck.
Person D: probability exists, luck doesn’t. (“You” are person D here.)
Person E: luck exists, probability doesn’t. If you didn’t get a heart, you are unlucky today for whatever reason. If you did get a heart, well, you could be even unluckier but you aren’t. An incredibly lucky person could well get a hundred hearts in a row.
Person F: neither probability nor luck exists; our lives are as deterministic as the game. Using the concepts of probability or luck even internally, as “fake concepts”, is useless, because actually everything is useless. (Some kind of fatalism.)
//
Now imagine somebody who replies to this comment saying “you could rephrase this in terms of beliefs”. This would be an example of a person essentially saying “hey, you should’ve used [my preferred ontology] instead of yours”, where the preferred ontology is one that uses the concept of “belief” instead of “ontology”. Which is fine!
I’ll also give you two examples of using ontologies — as in “collections of things and relationships between things” — for real-world tasks that are much dumber than AI.
ABBYY attempted to create a giant ontology of all concepts, then develop parsers from natural languages into “meaning trees” and renderers from meaning trees into natural languages. The project was called “Compreno”. If it had worked, it would have given them a “perfect” translation tool from any supported language into any supported language, without having to handle each language pair separately. To my knowledge they kept trying for 20+ years, and the project has probably died: I google Compreno every few years and there’s still nothing.
Let’s say you are Nestle and you want to sell cereal in 100 countries. You also want to be able to say “organic” on your packaging. For each country, you need to determine if your cereal would be considered “organic”. This also means that you need to know for all of your cereal’s ingredients whether they are “organic” by each country’s definition (and possibly for sub-ingredients, etc). And there are 50 other things that you also have to know about your ingredients — because of food safety regulations, etc. I don’t have first-hand knowledge of this, but I was once approached by a client who wanted to develop tools to help Nestle-like companies solve such problems; and they told me that right now their tool of choice was custom-built ontologies in Protege, with relationships like is-a, instance-of, etc.
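To make that less abstract, here’s a toy, plain-Python rendering of the kind of thing you’d model in Protege. All the class names, batches, and the “organic” rule are invented for illustration; real regulatory definitions are of course much messier:

```python
# A toy ontology with "is-a" and "instance-of" relationships, plus per-country
# property assertions. Everything below is made up for illustration.

is_a = {                      # subclass relationships ("is-a")
    "Oat": "Grain",
    "Grain": "Ingredient",
    "CaneSugar": "Sweetener",
    "Sweetener": "Ingredient",
}

instance_of = {               # individuals and their classes ("instance-of")
    "batch_1734": "Oat",
    "batch_2091": "CaneSugar",
}

organic_under = {             # per-country assertions about individuals
    ("batch_1734", "EU"): True,
    ("batch_2091", "EU"): False,
}

def classes_of(thing: str) -> set[str]:
    """Walk the is-a chain upward from an individual's class."""
    cls = instance_of.get(thing, thing)
    seen = set()
    while cls is not None:
        seen.add(cls)
        cls = is_a.get(cls)
    return seen

def cereal_is_organic(ingredient_batches: list[str], country: str) -> bool:
    """Toy rule: the cereal counts as organic in a country iff every
    ingredient batch is asserted organic under that country's definition."""
    return all(organic_under.get((b, country), False) for b in ingredient_batches)

print(classes_of("batch_1734"))                               # {'Oat', 'Grain', 'Ingredient'}
print(cereal_is_organic(["batch_1734", "batch_2091"], "EU"))  # False
```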
An ontology is a collection of sets of objects and properties (or maybe: a collection of sets of points in thingspace). An agent’s ontology determines the abstractions it makes.
For example, “chairs”_Zach is in my ontology; it is (or points to) a set of (possible-)objects (namely what I consider chairs) that I bundle together. “Chairs”_Adam is in your ontology, and it is a very similar set of objects (what you consider chairs). This overlap makes it easy for me to communicate with you and predict how you will make sense of the world.
(Also necessary for easy-communication-and-prediction is that our ontologies are pretty sparse, rather than full of astronomically many overlapping sets. So if we each saw a few chairs we would make very similar abstractions, namely to “chairs”_Zach and “chairs”_Adam.)
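(A toy sketch of that picture in code, if it helps. The objects and the two “chairs” concepts are made up, and measuring the overlap with Jaccard similarity is just my choice for illustration, not anything canonical:)

```python
# Two agents' ontologies as named sets of (possible) objects.
ontology_zach = {
    "chairs": {"stool", "armchair", "beanbag"},
    "trees":  {"oak_tree"},
}
ontology_adam = {
    "chairs": {"stool", "armchair", "log"},   # Adam counts a log you sit on as a chair
    "trees":  {"oak_tree"},
}

def overlap(a: set, b: set) -> float:
    """Jaccard overlap between two concepts: 1.0 means identical extensions."""
    return len(a & b) / len(a | b)

# High overlap is what makes "pass me a chair" work between us.
print(overlap(ontology_zach["chairs"], ontology_adam["chairs"]))  # 0.5
```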
(Why care? Most humans seem to have similar ontologies, but AI systems might have very different ontologies, which could cause surprising behavior. E.g. the panda-gibbon thing. Roughly, if the shared-human-ontology isn’t natural [i.e. learned by default] and moreover is hard to teach an AI, then that AI won’t think in terms of the same concepts as we do, which might be bad.)
[Note: substantially edited after Charlie expressed agreement.]
Just to paste my answer below yours since I agree:
There’s “ontology” and there’s “an ontology.”
Ontology with no “an” is the study of what exists. It’s a genre of philosophy questions. However, around here we don’t really worry about it too much.
What you’ll often see on LW is “an ontology,” or “my ontology” or “the ontology used by this model.” In this usage, an ontology is a set of building blocks used in a model of the world. It’s the foundational stuff that other stuff is made out of or described in terms of.
E.g. Minecraft has “an ontology,” which is the basic set of blocks (and their internal states, if applicable), plus a 3-D grid model of space.
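For instance, a toy version of such an ontology might look like this in Python (block types and world size invented for illustration, not Minecraft’s actual ones):

```python
# A toy "Minecraft-style" ontology: a small set of basic block types plus a
# 3-D grid of positions.
from enum import Enum

class Block(Enum):
    AIR = 0
    DIRT = 1
    STONE = 2
    WATER = 3

# The world model: every (x, y, z) cell holds exactly one basic block.
world = {(x, y, z): Block.AIR for x in range(4) for y in range(4) for z in range(4)}
world[(0, 0, 0)] = Block.STONE
world[(0, 1, 0)] = Block.DIRT

# Everything else ("a house", "a lake") is described in terms of these
# primitives; the ontology itself only contains blocks and grid positions.
print(world[(0, 1, 0)])   # Block.DIRT
```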
Hm, I think I see. Thanks. But what about abstract things? Things that never boil down to the physical. Like “probability”. Would the concept of probability be something that would belong to someone’s ontology?
It could be! People don’t use the same model of the world all the time. E.g. when talking about my living room I might treat a “chair” as a basic object, even though I could also talk about the atoms making up the chair if prompted to think differently.
When talking about math, people readily reason using ontologies where mathematical objects are the basic building blocks, e.g. “four is next to five.” But when talking about tables and chairs, statements like “this chair has four legs” don’t need to use “four” as part of the ontology; the “four-ness” is just describing a pattern in the actual ontologically basic stuff (chair legs).
I also agree. I was going to write a similar answer. I’ll just add my nuance as a comment to Zach’s answer.
I said a bunch about ontologies in my post on fake frameworks. There I give examples and I define reductionism in terms of comparing ontologies. The upshot is what I read Zach emphasizing here: an ontology is a collection of things you consider “real” together with some rules for how to combine them into a coherent thingie (a map, though it often won’t feel on the inside like a map).
Maybe the purest example type is an axiomatic system. The undefined terms are the ontological primitives, and the axioms are the rules for combining them. We usually combine an axiomatic system with a model to create a sense of being in a space; the classic example of this sort is Euclidean geometry.
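If it helps to see the “primitives plus rules” shape written out, here’s a minimal sketch in Lean, using a couple of made-up incidence-style axioms rather than Euclid’s actual ones:

```lean
-- A minimal axiomatic system in the spirit of incidence geometry (these are
-- illustrative axioms, not Euclid's). "Point", "Line", and "On" are the
-- undefined terms, i.e. the ontological primitives; the axioms are the
-- rules for combining them.
axiom Point : Type
axiom Line : Type
axiom On : Point → Line → Prop

-- Through any two distinct points there is a line containing both.
axiom line_through : ∀ p q : Point, p ≠ q → ∃ l : Line, On p l ∧ On q l

-- Every line contains at least two distinct points.
axiom two_points : ∀ l : Line, ∃ p q : Point, p ≠ q ∧ On p l ∧ On q l
```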
But in practice most folks use much fuzzier and more informal ontologies, and often switch between seemingly incompatible ones as needed. Your paycheck, the government, cancer, and a sandwich are all “real” in lots of folks’ worldviews, but people don’t always clearly relate those kinds of “real” to one another, because how they relate doesn’t usually matter.
I think ontologies are closely related to frames. I wonder if frames are just a special kind of ontology, or maybe the term we give for a particular use of ontologies. Mentioning this in case frames feel more intuitive than ontologies do.
(I agree. I think frames and ontologies are closely related; in particular, ontologies are comprehensive while frames just tell you what to focus on, without needing to give an account of everything.)
ELI5: Ontology is what you think the world is, epistemology is how you think about it.
Epistemic status: shaky. Offered because a quick answer is often better than a completely reliable one.
An ontology is a comprehensive account of reality.
The field of AI uses the term to refer to the “binding” of the AI’s map of reality to the territory. If, for example, the AI ends up believing that the internet is reality, and that all this talk of physics and galaxies and such is just a conversational ploy for one faction on the internet to gain status relative to another faction, then the AI has an ontological failure.
ADDED. A more realistic example would be the AI’s confusing its internal representation of the thing to be optimized with the thing the programmers hoped the AI would optimize. Maybe I’m not the right person to answer because it is extremely unlikely I’d ever use the word ontology in a conversation about AI.
So epistemic means: confidence of knowing?
Yes, the “epistemic status” is me telling you how confident I am.
I always confuse this with Deontology ;-)
If Ontology is about “what is?”, why is Deontology not about “what is not?”
Good question! Although I think it would be appropriate to move it to a comment instead of an answer.