So, my understanding of Chapman—and this is based on other things he’s written which I unfortunately can’t find right now; he can of course correct me if I’m wrong here—is that he’s often just not saying what it sounds like he’s saying, because he’s implicitly prefixing everything with “human-”. The article that I can’t find at the moment, the one that made this clear, was where he said, there’s no system to do X, then anticipated the counterargument, but the human brain does X, and replied, yes, but I’m talking about systems a human could execute, so “the human brain does X” is not relevant to what I’m talking about. But that’s the only place he explicitly said so! So I think when reading him you just have to do that everywhere—prefix “human-” to everything (although the exact meaning of that prefix seems to vary). When he says “system”, he actually means “system a human could execute”. When he says “rationality”, he actually means “how people usually construe rationality”. When he seems to confuse systems of facts and systems of norms, that’s not him getting mixed up; it’s that he’s actually talking about other people’s maps—and in other people’s maps these are often conflated—rather than talking about the territory. Now, personally I think this sort of terminology obfuscates rather than clarifies—you could just, you know, explicitly mark when you’re talking about human-X rather than X, or when you’re talking about people’s maps rather than the territory directly—but I think you have to understand it if you want to read Chapman’s writing.
In a lot of his articles Chapman uses the word “system” with the meaning the term has in developmental psychology, particularly in Kegan’s writing—it’s what Kegan labels as level 4. I don’t think “system that a human could execute” is a gloss that would allow someone without any background to distinguish things that are systems in the developmental-psychology sense from things that aren’t.
I think maybe you were thinking of this bit from the post “What they don’t teach you at STEM school”:

By system, I mean, roughly, a collection of related concepts and rules that can be printed in a book of less than 10kg and followed consciously. A rational system is one that is “good” in some way. There are many different conceptions of what makes a system rational. Logical consistency is one; decision-theoretic criteria can form another. The details don’t matter here, because we are going to take rationality for granted.

I’m pretty sure that’s not the particular one, but thank you all the same!

This one? From the CT-thesis section in A first lesson in meta-rationality:
the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow. If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is probably also an incomprehensibly weird one, which one could not consciously follow accurately. I say “probably” because we don’t know much about how minds work, so we can’t be certain.

What we can be certain of is that, because we don’t know how minds work, we can’t treat them as systems now. That is the case even if, when neuroscience progresses sufficiently, they might eventually be described that way. Even if God told us that “a human, reasoning meta-systematically, is just a system,” it would be useless in practice. Since we can’t now write out rules for meta-systematic reasoning in less than ten kilograms, we have to act, for now, as if meta-systematic reasoning is non-systematic.

That sounds like it might have been it?

Or you could explicitly mark when you are talking about impractical-ideal-X. Chapman’s default seems more reasonable to me.
I can’t agree with that, for a number of reasons. Note that the thing I’m claiming Chapman does is really a number of things, which I’ve summed up as “you have to prepend ‘human-’ to everything”—but the meaning of the prefix I’m summing things up with is actually context-dependent. Here are a few examples of what it can mean (if I’m correct—again, if Chapman himself wants to correct me, great!) and why it’s not a good way of talking.
Sometimes this means talking about… certain human patterns that a particular notion tends to invoke. E.g. “rationality” above—it does indeed frequently happen that those who go in for “rationality” or similar notions end up falling into the Straw Vulcan pattern. And it’s important to be able to discuss these patterns. But it’s a mistake to conflate the pattern itself with the idea that invokes it—especially as there may be multiple of the latter, distinct from one another; the conflation is a lossy operation. Better to say “rationality” when you mean rationality, and say “the pattern invoked by rationality” (or, since in this case we have a name for it, “Straw Vulcanism”) when you mean that. Because otherwise how will you tell apart the different ideas that can all invoke the Straw Vulcan pattern?
Like, let’s diagram this. The usual approach is that “rationality” (the word) points to rationality (the concept), which then itself has an arrow (via the “invokes in humans” operator) to Straw Vulcanism. If we take the initial arrow from “rationality” to rationality and alter it to point instead to Straw Vulcanism, how do we refer to rationality? “Idealized Straw Vulcanism”? I don’t think so! Especially because, once again: which idealization?
The alternative, I suppose, is that we don’t reroute any arrows, but instead just take it as implicit that we’re always supposed to apply “human-” afterward. And, like, use some sort of quotation thingy (e.g. the “idealized-” prefix) when we want to stop that application (like how we use quote marks to indicate that we are mentioning rather than using a word). But then, even though we’re using “rationality” to talk about Straw Vulcanism, we have to keep in mind that rationality doesn’t actually mean Straw Vulcanism (even though that’s what we’re using it to mean!), so that when we say “idealized rationality” we know what that means. This… this does not sound like a good way of handling things. I would recommend having words directly point to the thing they refer to.
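To make the lossiness concrete, here is a toy sketch (my own, not Chapman’s; all the names are invented for illustration), treating “what a word denotes” and “what a concept invokes in humans” as lookup tables:

```python
# Toy sketch (my illustration, not Chapman's): model "what a word points to"
# as a dict, and "what a concept invokes in humans" as a second dict.

# The usual arrangement: word -> concept, and separately concept -> pattern.
denotes = {"rationality": "rationality (the concept)"}
invokes_in_humans = {"rationality (the concept)": "Straw Vulcanism"}

word = "rationality"
concept = denotes[word]               # rationality (the concept)
pattern = invokes_in_humans[concept]  # Straw Vulcanism
# Both the concept and the pattern remain addressable: we can talk about either.

# The rerouted arrangement: the word points straight at the pattern.
denotes_rerouted = {"rationality": "Straw Vulcanism"}
# Now nothing in the table names the original concept at all -- that
# information is gone, which is the sense in which the move is lossy.
# Worse, if several distinct ideas invoke the same pattern, they collapse:
denotes_rerouted["decision theory"] = "Straw Vulcanism"
assert denotes_rerouted["rationality"] == denotes_rerouted["decision theory"]
```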
Sometimes this means talking about the map rather than the territory. Taking “X” not to mean X but to mean “X”, people’s idea of X.
The problem is that, well, most of the time we want to talk about the territory, not people’s maps. If I say “there was no Kuiper belt in 1700”, you should say “that is false”, not “that is true, because nobody had yet hypothesized a Kuiper belt”. If I want to say “there was no concept of a ‘Kuiper belt’ in 1700”, I can say that explicitly. Basically this way of talking is in a sense saying, you can’t actually use words, you can only mention them. But most of the time I do in fact want to use words, not mention them!
And again this ends up with similar problems to the above, which I won’t detail in full again. In this case they seem a bit more handleable, because there’s not the lossiness issue—the usual way of speaking is to say X in order to use the word “X” and to say “X” in order to mention the word “X”, but one could notionally come up with some bizarre reverse convention here. (Which, to be clear, I haven’t seen Chapman use—what he says when he actually wants to use a word rather than mentioning it, I don’t know. “The real, actual Kuiper belt”? IDK.) I still don’t think this is a good idea.
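As a toy illustration of the use/mention point (mine, with hypothetical identifiers), think of a variable as standing for the territory-side object and a string as standing for the map-side name:

```python
# Toy sketch of use vs. mention. The variable kuiper_belt stands in for the
# territory (the thing itself); the string "Kuiper belt" is the map-side name.
# All identifiers here are hypothetical, for illustration only.

kuiper_belt = {"hypothesized": 1951, "existed_in_1700": True}
concepts_in_1700 = set()  # no one had hypothesized it yet

# Using the word: a claim about the territory. False as stated:
claim_about_territory = not kuiper_belt["existed_in_1700"]  # "there was no Kuiper belt in 1700"
assert claim_about_territory is False

# Mentioning the word: a claim about the map. True:
claim_about_map = "Kuiper belt" not in concepts_in_1700
assert claim_about_map is True

# The always-mention convention amounts to silently reading the first claim
# as the second -- and then you need extra machinery to ever say the first.
```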
The most defensible one, I think, is where it effectively means “humanly realizable”, like with the “system” example above. This one is substantially less bad than the others, because while it’s still a bad idea, it’s at least workable. It’s usably bad rather than unusably bad. But I do still think it’s a bad idea. Once again this is a lossy operation: the distinction between “nondeterministic” and “chaotic”, both of which can get collapsed to “unpredictable in practice”, is worth preserving. And once again, to adopt this systematically would require similar contortions to the above, even if not as bad; once again I’ll skip the full argument. But yeah, I don’t think this is a good way of talking.
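A quick sketch of why that particular distinction matters (my example, not Chapman’s): a deterministic-but-chaotic process and a genuinely random one are both “unpredictable in practice”, but they behave very differently under rerunning and under small perturbations:

```python
import random

# Toy sketch: two processes that both collapse to "unpredictable in practice"
# under the lossy vocabulary, for entirely different reasons.

def chaotic(x, steps):
    """Deterministic but chaotic: the logistic map at r = 4.
    Rerun with the same x and you get the same orbit; perturb x
    by 1e-12 and the orbits diverge within a few dozen steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def nondeterministic(steps):
    """Genuinely random: no amount of precision in the 'initial
    condition' helps, because there isn't one."""
    return [random.random() for _ in range(steps)]

# Same starting point, same result -- chaotic, not nondeterministic:
assert chaotic(0.123, 50) == chaotic(0.123, 50)
# Nearby starting points, wildly different results -- unpredictable in practice:
print(chaotic(0.123, 50), chaotic(0.123 + 1e-12, 50))
# The random process gives different output on every run, full stop:
print(nondeterministic(3))
```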