It may be that none of my readers need the lecture at this point, but I’ve learned to be cautious about that sort of thing, so I’ll walk through the difference anyway.
One of my favorite literature professors used to tell me that one should always write under the assumption that each piece one writes is the first piece of one’s work that the reader has encountered. Not only does this make one’s writing more accessible (because odds are there will be someone for whom that is true!), it also helps us to be internally consistent, because we have to summarize our reasoning rather than taking shortcuts on the assumption that our audience already knows it.
Not to commit the fallacy of the golden mean or anything, but the two viewpoints are both metatools in the metatoolbox, as it were. You’re better off if you can use both in ways that depend on context and circumstance, rather than insisting that only toolbox reasoning is the universally best context-insensitive metaway to think.
I think you’re committing the fallacy of the golden mean. “Metatools” are still tools, and “metatoolboxes” are still toolboxes. If I’m understanding you correctly, and your point is “Toolbox thinking and lawful thinking are metatools in metatoolboxes, and should be used accordingly”, then you actually are arguing that toolbox reasoning is the universally best context-insensitive metaway to think.
Heck, right at the very beginning of this essay, you described the toolbox way of thinking as “[having] a big bag of tools that you can adapt to context and circumstance”, and you used that same wording almost verbatim to state your main argument about metatools and metatoolboxes. So it would appear that you are ultimately arguing in favor of toolbox thinking, yet for some reason saying you’re not. Have I misunderstood something somewhere?
If I’m understanding you correctly, and your point is “Toolbox thinking and lawful thinking are metatools in metatoolboxes, and should be used accordingly”, then you actually are arguing that toolbox reasoning is the universally best context-insensitive metaway to think.
Eliezer’s argument in this post is that “toolbox reasoning is the best way to think” is ambiguous between at least three interpretations:
(a) Humans shouldn’t try to base all their daily decisions on a single simple explicit algorithm.
(b) Humans should never try to think in terms of simple, all-encompassing, unconditional, exceptionless rules and patterns, or should only do so when there’s minimal risk of mistaking that rule for a simple-algorithm-you-can-base-every-decision-on.
(c) Humans should rarely try to think in terms of such rules. It’s useful sometimes, but only in weird exceptional cases.
Your point is that (a) is true, and that toolbox thinking therefore “wins”. But this depends on which interpretation we use for “toolbox thinking” — which is a question that doesn’t matter and has no right answer anyway, because “toolbox thinking” is just a phrase Eliezer made up to gesture at a possible miscommunication/confusion, and doesn’t have an established meaning.
Eliezer’s claim, if I understand him right, is that (a) is clearly true, (b) is clearly false, and (c) is very probably false. (c) is the more interesting version of the claim, and the hardest to quickly resolve, since terms like “rarely” are themselves vague and need more operationalization. But a fair number of people do reject something like (a), and a fair number of people do endorse something like (b), so we need to address those views in some way, while being careful not to weak-man people who have more credible and nuanced positions.
If I search for the phrase “toolbox thinking” on LessWrong, I find posts like Developmental Thinking Shout-out to CFAR that use it, which suggests to me that it’s not something Yudkowsky just made up.
In the context of this post, David Chapman’s How To Think Real Good doesn’t use the word “toolbox”, but it does speak about intellectual tools. When Yudkowsky uses the term here, it seems to me that he is gesturing towards the argument made in that article.
To me the disagreement seems to be:
Yudkowsky: Thinking of the maze as inherently being a Euclidean object by its essential nature is the correct way to think of the maze, even when you might actually use a different algorithm to navigate it.
Chapman: The maze doesn’t have an essential nature that you can describe as a Euclidean object. It’s a Euclidean object only after you apply a specific mental model to it.
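To make the maze contrast concrete, here is a minimal sketch of my own (the maze layout and all function names are invented for illustration; neither author gives code). The BFS solver treats the maze as a fully specified mathematical object and derives a provably shortest path, which is the “law” stance; the right-hand wall-follower is a local rule an agent can run with no global model at all, which is the “toolbox” stance. Both reach the exit; only one is guaranteed optimal.

```python
from collections import deque

MAZE = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#.....#E#",
    "#########",
]

def open_cells(maze):
    """All traversable cells, including start and exit."""
    return {(r, c) for r, row in enumerate(maze)
            for c, ch in enumerate(row) if ch != "#"}

def locate(maze, target):
    for r, row in enumerate(maze):
        for c, ch in enumerate(row):
            if ch == target:
                return (r, c)

def bfs_shortest_path(maze):
    """'Law' stance: the maze is a graph; derive the optimal route."""
    cells = open_cells(maze)
    start, goal = locate(maze, "S"), locate(maze, "E")
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if step in cells and step not in seen:
                seen.add(step)
                queue.append(path + [step])

def wall_follower(maze):
    """'Toolbox' stance: keep your right hand on the wall.
    A purely local heuristic; it works here because this maze has no loops."""
    cells = open_cells(maze)
    pos, goal = locate(maze, "S"), locate(maze, "E")
    dr, dc = 0, 1  # start facing east
    path = [pos]
    while pos != goal:
        # Try a right turn first, then straight, then left, then reverse.
        for ndr, ndc in ((dc, -dr), (dr, dc), (-dc, dr), (-dr, -dc)):
            nxt = (pos[0] + ndr, pos[1] + ndc)
            if nxt in cells:
                (dr, dc), pos = (ndr, ndc), nxt
                path.append(pos)
                break
    return path

print("BFS steps:          ", len(bfs_shortest_path(MAZE)) - 1)
print("Wall-follower steps:", len(wall_follower(MAZE)) - 1)
```

On this layout the wall-follower detours into a dead end before recovering (14 steps against BFS’s 12). The point of the sketch: the heuristic’s success is explained by facts about the maze-as-object, which is roughly Yudkowsky’s claim, whereas Chapman would say the “object” only shows up once you’ve chosen the grid representation.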
Or to move to the more specific disagreement:
Yudkowsky: Reality is probabilistic in its essential nature, even if we might not have the mental tools to calculate things out with Bayes’ rule.
Chapman: Probability theory doesn’t extend logic, and there are things in reality that logic describes well but probability theory doesn’t, so reality is not probabilistic in its essential nature.
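For readers who want the disputed claim pinned down, here is the propositional case both sides agree on, as a worked example of my own rather than a quotation from either author: in the limit of certainty, Bayes’ rule reproduces classical entailment, which is the sense in which Jaynes-style Bayesians say probability “extends logic”. Chapman’s objection is that this construction covers propositional logic only, not quantified (first-order) logic.

```latex
% Bayes' rule:
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

% Limiting case: suppose H logically entails E, so P(E \mid H) = 1,
% and assume P(\neg E) > 0. Observing \neg E then forces
P(H \mid \neg E)
  = \frac{P(\neg E \mid H)\, P(H)}{P(\neg E)}
  = \frac{0 \cdot P(H)}{P(\neg E)}
  = 0

% which is modus tollens recovered as the certainty limit of
% probabilistic updating. Nothing in the derivation involves
% quantifiers, which is where Chapman locates the gap.
```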