The Cluster Structure of Thingspace
The notion of a “configuration space” is a way of translating object descriptions into object positions. It may seem like blue is “closer” to blue-green than to red, but how much closer? It’s hard to answer that question by just staring at the colors. But it helps to know that the (proportional) RGB color coordinates are 0:0:5 for blue, 0:3:2 for blue-green, and 5:0:0 for red. It would be even clearer if plotted on a 3D graph.
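The intuition can be checked numerically; a minimal sketch, treating the proportional RGB triples above as points and measuring straight-line distance:

```python
import math

# Proportional RGB coordinates from the text: blue, blue-green, red.
blue = (0, 0, 5)
blue_green = (0, 3, 2)
red = (5, 0, 0)

def distance(a, b):
    """Euclidean distance between two color points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Blue really is closer to blue-green than to red.
print(distance(blue, blue_green))  # ~4.24
print(distance(blue, red))         # ~7.07
```

The distances make the vague “closer” quantitative: about 4.2 units to blue-green versus about 7.1 to red.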
In the same way, you can see a robin as a robin—brown tail, red breast, standard robin shape, maximum flying speed when unladen, its species-typical DNA and individual alleles. Or you could see a robin as a single point in a configuration space whose dimensions described everything we knew, or could know, about the robin.
A robin is bigger than a virus, and smaller than an aircraft carrier—that might be the “volume” dimension. Likewise a robin weighs more than a hydrogen atom, and less than a galaxy; that might be the “mass” dimension. Different robins will have strong correlations between “volume” and “mass”, so the robin-points will be lined up in a fairly linear string, in those two dimensions—but the correlation won’t be exact, so we do need two separate dimensions.
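The near-linear string of robin-points can be made concrete. A toy sketch, with made-up robin measurements (all numbers are purely illustrative):

```python
# Hypothetical robin measurements (volume in cm^3, mass in g): strongly
# but not perfectly correlated, so the points line up in a near-linear string.
volumes = [68.0, 72.5, 75.0, 80.2, 83.1, 90.4]
masses  = [63.1, 68.0, 70.2, 74.9, 79.5, 84.0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(volumes, masses)
print(round(r, 3))  # close to, but not exactly, 1 — hence two separate dimensions
```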
This is the benefit of viewing robins as points in space: You couldn’t see the linear lineup as easily if you were just imagining the robins as cute little wing-flapping creatures.
A robin’s DNA is a highly multidimensional variable, but you can still think of it as part of a robin’s location in thingspace—millions of quaternary coordinates, one coordinate for each DNA base—or maybe a more sophisticated view that . The shape of the robin, and its color (surface reflectance), you can likewise think of as part of the robin’s position in thingspace, even though they aren’t single dimensions.
Just like the coordinate point 0:0:5 contains the same information as the actual HTML color blue, we shouldn’t actually lose information when we see robins as points in space. We believe the same statement about the robin’s mass whether we visualize a robin balancing the scales opposite a 0.07-kilogram weight, or a robin-point with a mass-coordinate of +70.
We can even imagine a configuration space with one or more dimensions for every distinct characteristic of an object, so that the position of an object’s point in this space corresponds to all the information in the real object itself. Rather redundantly represented, too—dimensions would include the mass, the volume, and the density.
If you think that’s extravagant, quantum physicists use an infinite-dimensional configuration space, and a single point in that space describes the location of every particle in the universe. So we’re actually being comparatively conservative in our visualization of thingspace—a point in thingspace describes just one object, not the entire universe.
If we’re not sure of the robin’s exact mass and volume, then we can think of a little cloud in thingspace, a volume of uncertainty, within which the robin might be. The density of the cloud is the density of our belief that the robin has that particular mass and volume. If you’re more sure of the robin’s density than of its mass and volume, your probability-cloud will be highly concentrated in the density dimension, and concentrated around a slanting line in the subspace of mass/volume. (Indeed, the cloud here is actually a surface, because of the relation VD = M.)
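The cloud-is-actually-a-surface observation can be illustrated directly. In this sketch (the density value is invented), density is known precisely while volume is uncertain, and every sampled point lands exactly on the surface M = V × D:

```python
import random

random.seed(0)

# If density is known precisely but volume is uncertain, the probability
# cloud in (volume, density, mass) space collapses onto the surface
# M = V * D: every sampled point satisfies the constraint exactly.
density = 0.95  # hypothetical, g/cm^3
samples = []
for _ in range(1000):
    volume = random.gauss(75.0, 5.0)   # wide uncertainty in volume
    mass = volume * density            # mass is then fully determined
    samples.append((volume, density, mass))

# The "cloud" has no thickness off the surface M = V * D.
assert all(abs(m - v * d) < 1e-9 for v, d, m in samples)
```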
“Radial categories” are how cognitive psychologists describe the non-Aristotelian boundaries of words. The central “mother” conceives her child, gives birth to it, and supports it. Is an egg donor who never sees her child a mother? She is the “genetic mother”. What about a woman who is implanted with a foreign embryo and bears it to term? She is a “surrogate mother”. And the woman who raises a child that isn’t hers genetically? Why, she’s an “adoptive mother”. The Aristotelian syllogism would run, “Humans have ten fingers, Fred has nine fingers, therefore Fred is not a human” but the way we actually think is “Humans have ten fingers, Fred is a human, therefore Fred is a ‘nine-fingered human’.”
We can think about the radial-ness of categories in intensional terms, as described above—properties that are usually present, but optionally absent. If we thought about the intension of the word “mother”, it might be like a distributed glow in thingspace, a glow whose intensity matches the degree to which that volume of thingspace matches the category “mother”. The glow is concentrated in the center of genetics and birth and child-raising; the volume of egg donors would also glow, but less brightly.
Or we can think about the radial-ness of categories extensionally. Suppose we mapped all the birds in the world into thingspace, using a distance metric that corresponds as well as possible to perceived similarity in humans: A robin is more similar to another robin, than either is similar to a pigeon, but robins and pigeons are all more similar to each other than either is to a penguin, etcetera.
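Such a distance metric can be sketched with toy feature vectors (the three features and all numbers here are invented for illustration):

```python
import math

# Invented feature vectors (flies, sings, relative size), scaled so that
# straight-line distance roughly tracks perceived similarity.
birds = {
    "robin_a": (1.0, 1.0, 0.20),
    "robin_b": (1.0, 1.0, 0.25),
    "pigeon":  (1.0, 0.0, 0.40),
    "penguin": (0.0, 0.0, 1.00),
}

def dist(a, b):
    return math.dist(birds[a], birds[b])

# A robin is more similar to another robin than to a pigeon, and robins
# and pigeons are more similar to each other than either is to a penguin.
assert dist("robin_a", "robin_b") < dist("robin_a", "pigeon")
assert dist("robin_a", "pigeon") < dist("robin_a", "penguin")
assert dist("pigeon", "penguin") > dist("robin_a", "pigeon")
```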
Then the center of all birdness would be densely populated by many neighboring tight clusters, robins and sparrows and canaries and pigeons and many other species. Eagles and falcons and other large predatory birds would occupy a nearby cluster. Penguins would be in a more distant cluster, and likewise chickens and ostriches.
The result might look, indeed, something like an astronomical cluster: many galaxies orbiting the center, and a few outliers.
Or we could think simultaneously about both the intension of the cognitive category “bird”, and its extension in real-world birds: The central clusters of robins and sparrows glowing brightly with highly typical birdness; satellite clusters of ostriches and penguins glowing more dimly with atypical birdness, and Abraham Lincoln a few megaparsecs away and glowing not at all.
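The glowing-points picture can be mocked up by letting “glow” fall off with distance from the center of the bird cluster; all coordinates below are invented:

```python
import math

# Toy positions in a 3-feature slice of thingspace (flies, sings, size),
# purely illustrative. "Glow" = typicality, decaying with distance from
# the center of the bird cluster.
points = {
    "robin":   (1.0, 1.0, 0.20),
    "sparrow": (1.0, 1.0, 0.18),
    "pigeon":  (1.0, 0.2, 0.40),
    "penguin": (0.0, 0.0, 1.00),
    "lincoln": (0.0, 0.0, 9.00),  # a few "megaparsecs" away
}

# Center of the bird cluster: the mean of the highly typical birds.
center = tuple(sum(points[b][i] for b in ("robin", "sparrow", "pigeon")) / 3
               for i in range(3))

def glow(name):
    return 1.0 / (1.0 + math.dist(points[name], center))

# Typical birds glow brightly, penguins dimly, Lincoln hardly at all.
assert glow("robin") > glow("penguin") > glow("lincoln")
```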
I prefer that last visualization—the glowing points—because as I see it, the structure of the cognitive intension followed from the extensional cluster structure. First came the structure-in-the-world, the empirical distribution of birds over thingspace; then, by observing it, we formed a category whose intensional glow roughly overlays this structure.
This gives us yet another view of why words are not Aristotelian classes: the empirical clustered structure of the real universe is not so crystalline. A natural cluster, a group of things highly similar to each other, may have no set of necessary and sufficient properties—no set of characteristics that all group members have, and no non-members have.
But even if a category is irrecoverably blurry and bumpy, there’s no need to panic. I would not object if someone said that birds are “feathered flying things”. But penguins don’t fly!—well, fine. The usual rule has an exception; it’s not the end of the world. Definitions can’t be expected to exactly match the empirical structure of thingspace in any event, because the map is smaller and much less complicated than the territory. The point of the definition “feathered flying things” is to lead the listener to the bird cluster, not to give a total description of every existing bird down to the molecular level.
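The point about useful-but-leaky definitions can be made concrete. A sketch with a hand-built toy extension of “bird”:

```python
# The definition "feathered flying things" as a simple intensional rule,
# checked against a toy extensional bird list. The rule has exceptions
# (penguins, ostriches), yet still points a listener at the right cluster.
birds = {
    "robin":   {"feathered": True, "flies": True},
    "sparrow": {"feathered": True, "flies": True},
    "eagle":   {"feathered": True, "flies": True},
    "penguin": {"feathered": True, "flies": False},
    "ostrich": {"feathered": True, "flies": False},
}

def rule(x):
    return x["feathered"] and x["flies"]

exceptions = [name for name, x in birds.items() if not rule(x)]
hit_rate = 1 - len(exceptions) / len(birds)
print(exceptions)  # ['penguin', 'ostrich']
print(hit_rate)    # 0.6 — imperfect, but good enough to find the cluster
```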
When you draw a boundary around a group of extensional points empirically clustered in thingspace, you may find at least one exception to every simple intensional rule you can invent.
But if a definition works well enough in practice to point out the intended empirical cluster, objecting to it may justly be called “nitpicking”.
But if a definition works well enough in practice to point out the intended empirical cluster, objecting to it may justly be called “nitpicking”.
You should probably put in a disclaimer excepting mathematics from this—assuming that you agree it should be excepted. (That is, assuming you agree that “Aristotelian” precision—what mathematicians call “rigor”—is appropriate in mathematics.)
“Definition” has a different definition in math.
Mathematics is largely already excepted from the above discussion—this post is talking about empirical clusters only (“When you draw a boundary around a group of extensional points empirically clustered in thingspace”), and mathematics largely operates in a priori truths derived from axioms. For example, no one needs to do a study of triangles to see whether their angles do, indeed, add up to 180 degrees—even when that’s not part of the definition of a triangle, it follows from the other definitions and axioms.
What’s interesting about “Thingspace” (I sometimes call it “orderspace”) is that it flattens out all the different combinations of properties into a mutually exclusive space of points. An observable “thing” in the universe can’t occupy two different points in Thingspace. Yes, you can have a region in Thingspace representing your uncertainty about the classification (if you’re a mere mortal, you always have this error bar), but the piece-of-universe-order you are trying to classify is, in ideal terms, only one point in the space.
IMO this could explain the way we deal with causality. Why do we say effects have only one cause? Where does the Principle of Sufficient Reason come from? The universe is not actually quantized in pieces that have isolated effects on each other. However, causes and effects are “things”, they are points in Thingspace and as “things” they actually represent aggregates, bunches of variable values that when recognized as a whole have, by definition, unique cause-effect relationships with other “things”. I see causality as arrows from one area of thing space to another. Some have tried to account for causality with complex Bayesian networks based on graph theory that are hard to compute. But I think applying causality to labeled clusters in Thingspace instead of trying to apply it to entangled real values seems simpler and more accurate. And you can do it at different levels of granularity to account for uncertainty. The space is then most useful classified hierarchically into an ontology. Uncertainty about classification is then represented by using bigger, vaguer, all encompassing clusters or “categories” in the Thingspace and high level of certainty is represented by a specified small area.
I once tried (and pretty much failed) to create a novel machine learning algorithm based on a causality model between hierarchical EM clusters. I’m not sure why it failed. It was simple and beautiful but I had to use greedy approaches to reduce complexity which might have broken my EM-algorithm. Well at least it (just barely) got me a masters degree. I still believe in my approach and I hope someone will figure it out some day. I’ve been reading and questioning the assumptions underlying all of this lately and specially pondering the link between the physical universe and probability theory and I got stuck at the problem of the arrow of time which seems to be the unifying principle but which also seems not that well understood. A well… maybe in another life.
Why would more uncertainty = bigger cluster? Wouldn’t uncertainty be expressed by using smaller clusters? I.e. if you’re uncertain about a cluster you fall-back on a smaller subset of things that you are more certain pertain to that classification?
If we find a category that has a very tight cluster, such that for that category it’s reasonably straightforward to define that cluster, and only a tiny handful of distant outliers that seem to only shakily fit with the rest of the category, then it may be wise in some cases to consciously redefine that category in terms of the explicit definition that represents the tight cluster, and maybe use a different category, or a broader one, to represent or include those outliers.
Psy-Kosh, dangerous heuristic. Isn’t that how the Nazis thought of the Jews? We should look first and foremost at ways things fit into clusters, not ways they don’t—otherwise nine-fingered Fred gets ruled out of being human at an early hurdle. I’m sure you’ll agree Fred fits better into ‘human’ than ‘broad general-human-type’, despite his missing digit.
Ostriches are a long way from that tight, feathery birdy cluster, but we leave them out of ‘general bird-ness’ at our peril. Mr Ostrich scores 84% on birdiness, not 16% on not-birdiness. (He also scores in the high 60s in dinosauriness, but that’s another matter.)
I sense these 6 essays on cognitive semantics are going to bring us back to transhumanism sooner or later. As of right now, whatever the radial distance from the prototype, and except on the Island of Dr Moreau, you are DEFINITELY human or definitely not, definitely a bird or definitely not. Pluto is DEFINITELY a pla...… whoops.
or maybe a more sophisticated view that .
?
What are the dimensions of thingspace?
Are “number of sides”, “IQ”, “age”, and “font” all dimensions?
And what are the points in thingspace? It sounds like they include anything that is somewhat “mother” and anything that is somewhat “robin”. (And I should think thingspace is a point in thingspace too.)
I think this post makes some good points, the main one, for me, being that words are centers of (indefinitely extending) clusters rather than boundaries of sets. But I think the notion of thingspace rests on shaky foundations: it assumes the world is broken down into things and those things have attributes.
We don’t all share the same thingspace do we?
I think thingspace is meant to be an abstraction. It’s just a map to help us think about categorisation of objects.
Thingspace seems rather like cladistics, in which you come up with groups of characteristics and then work out trees of evolutionary descent. Note that this originated in studying the evolution of life on Earth and piecing together the Tree of Life, but is applicable anywhere an evolutionary process can work, e.g. linguistic evolution. Without necessarily going as far as the actual sorting stuff into trees, cladistics may be useful in helping conceptualise thingspace and distance in thingspace.
A thought I recently had: Shouldn’t we be interested in “anti-clusters” too? ie, regions of comparatively low density compared to the surroundings/Patterns of stuff that tends to conspicuously fail to happen compared to what would be otherwise expected.
This essay reminds me of Samuel Delany saying that the word “the” seems like a gray ellipse to him, and each adjective modifies the ellipse.
does thingspace remain static? that is; would definitional/structural changes within the space correspond to a folding or reorienting of the space where the clusters become reorganized?
You could give relatively simple verbal intensional definitions to try and lead someone to the bird cluster, yes. But if you had someone who wasn’t practically accessible through those verbal communications, how would you do it?
You’d have to show extensional examples, positives and negatives, and indicate the value of each example by some clear and consistent signal.
You couldn’t give all possible extensional examples, so you would have to select some. And you couldn’t give them all at once, so you’d have to present them in a particular order.
What is the theory for finding optimized selections and orderings of examples for leading the learner to the cluster? How does that theory extend to the more complicated case where you have to communicate the subtypes within the “bird” cluster?
This is one of the many things that the Theory of Direct Instruction that’s presented in Engelmann and Carnine’s text Theory of Instruction: Principles and Applications addresses. [They call it a “multi-dimensional non-comparative concept” (“non-comparative” meaning the value of any example is absolute rather than relative to the last), or “noun” for short.]
And of course, if you had to select and order the presentation of simple verbal definitions/descriptions as examples themselves, the theory would also have application.
Please see here for a clarification of what “someone who wasn’t practically accessible through those verbal communications” means, and a more concrete example of teaching the higher-order class ‘vehicles’ and sub-classes.
Hi there, fairly new here to LW. I’m reading through the sequences in order; went through Map and Territory and Mysterious Answers to Mysterious Questions. Now going through this 37 Ways Words Can Be Wrong sequence, as it’s recommended before I delve into reductionism.
It’s been said several times that LW tries to cater to a broad audience, but I find myself lost here. I have not extensively studied physics (only one year of engineering so far), and the physics references here are pretty much unintelligible to me. I don’t know what a configuration space is, or quaternary coordinates, or thingspace, or what strings are being referred to. I find myself struggling to grasp this post.
EDIT: I’ve read through this a few times. I still have almost no idea on most of the math, but I’m guessing the “moral” of this post is basically “don’t become overly obsessed with definitions”?
Reading Eliezer’s quantum physics sequence should help with configuration spaces and thingspaces, and probably some of the other physics references as well.
It’s not important to your central claim, but this is the strawmanniest thing since Straw Man came to Straw Town.
Um. That...?
I guess there was a misformatted link in there or something?
One small (hopefully not too obvious) addition: the cluster-nature of thingspace depends on the distance function, and there is no single obviously correct one. Is a penguin more like an eagle or a salmon? Depends on what you mean by “more like”. It’s perfectly reasonable to say “right now, the most useful concept of ‘more like’ is ‘last common ancestor’, so penguins are more like eagles and ‘birds’ is a cluster”, and then, as your needs change, to say “right now, the most useful concept of ‘more like’ is similarity of habitat, so penguins are more like salmon and ‘sealife’ is a cluster”.
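A toy sketch of that metric-dependence (all coordinates invented): weighting the same two features differently regroups the same three animals:

```python
import math

# Invented features: (ancestry_coordinate, habitat_coordinate).
# Birds share an ancestry coordinate; penguin and salmon share a habitat one.
animals = {
    "penguin": (1.0, 9.0),
    "eagle":   (1.2, 1.0),
    "salmon":  (8.0, 9.2),
}

def dist(a, b, w_ancestry, w_habitat):
    """Weighted Euclidean distance; the weights define the metric."""
    (a1, a2), (b1, b2) = animals[a], animals[b]
    return math.sqrt(w_ancestry * (a1 - b1) ** 2 + w_habitat * (a2 - b2) ** 2)

# Weight ancestry: penguin clusters with eagle ("birds").
assert dist("penguin", "eagle", 1, 0) < dist("penguin", "salmon", 1, 0)
# Weight habitat: penguin clusters with salmon ("sealife").
assert dist("penguin", "salmon", 0, 1) < dist("penguin", "eagle", 0, 1)
```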
why yes
clusters can overlap, and the word “more like” uses different clusters of clusters depending on context
Before reading this article, I had already been using this visualization technique to think of probability densities. I wonder how common that is? Probably happened because of exposure to statistics.
What I actually thought reading this was: “Frodo is a nine-fingered Hobbit”...