I’m mostly referring to the habit of thinking about things in terms of computations. Without this, you might have weird ideas about what the mind can do with information, what can constitute a successful map/territory relationship, etc. Sorry, I’m not being very specific here; I just think there are a ton of philosophical errors which boil down to not understanding computation.
Granted, most of the important points are probably already “in the air” from computers playing such a central role in life and society today. People probably don’t need formal information theory to have good intuitions about what information is, today, compared to in the past. But it probably still helps!
I just think there’s something really important about the concept of computation, for philosophy. It seems even more important than materialism, in terms of how it shapes thoughts about a variety of subjects. Like, yeah, algorithmic information theory is pretty great, but as a prerequisite you should be thinking of things in terms of computations, and this to me seems like the more important overall insight.
Given your first suggestion, is the assumption that philosophers would attempt to address everything?
I was just trying to go off of Daniel K’s prompt, which was precisely for philosophers to try to address everything. I agree that this is not obviously the best route.
I went to a relatively backwater undergrad, and personally, I thought the philosophy profs had a big emphasis on thinking clearly. My Epistemology class consisted of reading a bunch of articles (ie, the textbook did nothing to summarize results, only presenting the original texts); but class time was all about dissecting the arguments, not regurgitating facts (and only a little about history-of-philosophy-for-history’s-sake).
Side note, the profs I talked to also thought philosophy was pretty useless as a subject (like objectively speaking society should not be paying to support their existence). I think they thought the main saving grace was that it could be used to teach critical thinking skills.
Possibly, this is just very different from grad programs in philosophy.
Plus, they almost never collect any actual data on whether these methods work.
So, it’s simply not the case that one’s approach to philosophy is that contingent on the past, or on an extreme focus on literature reviews.
True, but I perceive room for them to be even less shy, and I stand by my earlier speculation. (I’ve read enough philosophy to know what Lance Bush was pointing at.)
Oh, it doesn’t have to be the general public. I doubt it would be. You could select good judges.
I think a perhaps-more-practical version of “work thru Li & Vitanyi” is “learn computer science”; eg, the book “Logic and Computability” might be a good text for philosophers. (It is thorough and technical, but introductory.)
Pirsig draws a contrast with music students, whose studies consist primarily of developing their skill at their instrument, not musicology, whereas philosophy students never do philosophy at all, only philosophology.
The contrast with music seems misleading. Almost any other field is full of studying the past! You don’t primarily “learn physics” by going and doing experiments; you mainly learn it by studying what others have already done.
Granted, physicists read new textbooks summarizing the old results, while philosophers more often read the original material. That’s a pretty big difference. However, that might be because philosophy is more directly about the critical thinking skills themselves (hence you want to read how the original philosopher describes their own insight), while physics is more just about the end results of that process.
It is common for PhD theses to have a very large literature review. How over-the-top is philosophy in this, really? I would guess that many “humanities” areas are similarly heavy on the lit review. (Although, you could plausibly accuse those areas of the same dysfunction you see in philosophy.)
Formal debate is really really terrible in practice as a way to train anything resembling good philosophy, or else I’d think that a pretty good suggestion.
Specifically, all forms of debate devolve into speed-talking contests: if you make a point that your opponent doesn’t oppose, the judges consider them to have conceded it, so you want to make as many points as humanly possible in the time allotted. Aside from that, the game is all about coming up with clever argumentative maneuvers that have little to do with what arguments would work in real life and nothing at all to do with the truth.
Multiple attempted reforms of debate rules to get around these problems have failed, producing essentially the same result.
With respect to the distinct skills in “philosophy”—I’m not so sure. I think maybe philosophy has correctly divided itself into subfields such as ontology, ethics, and epistemology. These subfields address different sets of questions, but use similar/identical “philosophical method” in doing so. This suggests that a common set of skills are involved in many philosophical pursuits, somewhat unlike the sports analogy.
Granted, I do suspect that there’s a list of skills, which might best be trained separately. Here is an attempt to list what they might be:
- Maybe just come up with “is a hotdog a sandwich” type questions for lots of everyday concepts?
- Hypothesis generation: given a set of “data” (usually from intuition; eg, cases where someone does, or doesn’t, seem to be behaving morally), generate a hypothesis which fits the data (eg, a theory of morality).
  - This might be trained by trying to come up with dictionary definitions of foreign words, given only examples.
  - Another exercise could involve improving dictionary definitions of familiar words.
- Counterexample generation: strike down a theory by coming up with a case which clearly goes against it.
  - Give counterexamples to dictionary definitions.
  - Counterexamples in mathematics: give negative examples for false conjectures.
  - Just, like, a whole lot of critiquing each other’s theories.
- Argumentation: clearly, precisely, and convincingly express a philosophical view, supporting it with good reasoning and avoiding missteps (eg fallacies).
  - Training in formal logic and other valid methods of inference, such as probability and statistics.
  - Fallacies and biases.
  - Lots of writing practice, with detailed critiques for clarity, accuracy, and persuasiveness.
The above three skills seem to be a bit overly anchored to a specific way of doing philosophy for my taste, but there you have it.
To turn this into a training technique, we might:
1. Have a big list of questions which approximates all the big questions in philosophy.
2. Try to answer all of them from a plausible, coherent perspective.
3. Get feedback on how coherent (and plausible) the perspective is, how well-argued the answers were, etc.
Or, perhaps, a courtroom-like examination process where a committee selects a line of questioning? (Roughly, draw some questions randomly off of the Big List to try to catch the student off-guard, and then depending on the student’s answers, go down a line of questioning which best searches for flaws in the view?)
To some extent, I know I’ve gotten better at philosophy simply by finding that my beliefs have changed, and my new justifications clearly seem much better-grounded than my old. This doesn’t work as a general tool (obviously it overly praises those who come to strong convictions, since they will rate their new beliefs extremely favorably), but it’s far more than nothing.
It seems to me that the regard of colleagues would, actually, be a useful signal as well (even if problematic for similar reasons).
However, I’m far more fond of mathematical philosophy, where it is easier to see whether you’ve accomplished something (have you proven a strong theorem? have you codified useful mathematical structures which capture something important? these are subjective questions, but, less so).
If philosophers would be a bit less shy about their reliance on intuition, perhaps they could openly admit that they are relying on their own personal intuition, without projecting it onto anyone else. There’s nothing shameful about analyzing one’s personal intuitions, for one’s own benefit and for the benefit of others. For example, I am happy to read someone like Russell or Descartes examining their own intuitions. Someone’s intuitions can be interesting, and can be a source of insight!
But philosophers seem to have a pretty strong tendency to try to sound more authoritative, stating something as a generally-shared intuition.
That’s all up to the judges, audience, and participants. If you took a typical comedy crowd, then of course you’d basically just get comedy out, with maybe a bit of a philosophical twist. If academic philosophers started doing stand-up philosophy with each other, then you’d get something else. LWers would get yet a third thing.
If we assume that the judges are the best we could select, well, then you still get some distortion from the fact that they probably have to judge fairly quickly and will be prone to certain human biases (eg judging attractive people more highly).
I think it’s totally desirable to have some originality bias; accuracy, precision, and convincingness are legitimately not worth much without originality. OTOH, yeah, this can train some bad habits.
Most judges (in my very limited knowledge) don’t go completely on gut feeling, but rather have a rubric. For example, judges might rate contestants on Originality, Accuracy, Precision, and Convincingness and add up the scores, or subtract points for each standard bias/fallacy, or whatever. This sort of thing can help avoid overly skewed judging.
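To make the rubric idea concrete, here’s a minimal sketch of a scorer. The category names are the ones mentioned above; the 0–10 scale and the size of the fallacy penalty are invented for illustration, not anything standard in actual debate judging:

```python
# Toy rubric scorer: sum category ratings, subtract a penalty per flagged
# fallacy/bias. Scale and penalty size are assumptions for illustration.

CATEGORIES = ["originality", "accuracy", "precision", "convincingness"]
FALLACY_PENALTY = 2  # points deducted per standard bias/fallacy spotted

def score(ratings: dict, fallacies_flagged: int) -> int:
    base = sum(ratings[c] for c in CATEGORIES)  # each category rated 0-10
    return base - FALLACY_PENALTY * fallacies_flagged

print(score(
    {"originality": 7, "accuracy": 8, "precision": 6, "convincingness": 9},
    fallacies_flagged=1,
))  # -> 28
```

Even something this crude forces the judge to commit to separate judgments per dimension, which is the anti-skew property I had in mind.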
Should this tag only posts specifically tackling deliberate practice as a topic in itself, or also posts discussing the application of deliberate practice to X skill?
A friend of mine once suggested a stand-up-comedy type model for philosophy (“stand-up philosophy”). I think this could have some good dynamics. Imagine philosophers competing to blow the minds of the audience and judges.
Sorry, here’s another attempt to convey the distinction:
Possible belief #1 (first bullet point):
If we perform the causal intervention of getting someone to put on sunscreen, then (on average) that person will stay out in the sun longer; so much so that the overall incidence of skin cancer would be higher in a randomly selected group which we perform that intervention on, in comparison to a non-intervened group (despite any opposing beneficial effect of sunscreen itself).
I believe this is the same as the second interpretation you offer (the one which is consistent with use of the term “net”).
Possible belief #2 (second bullet point):
If we perform the same causal intervention as in #1, but also hold fixed the time spent in the sun, then the average incidence of skin cancer would be reduced.
This doesn’t flatly contradict the first bullet point, because it’s possible that sunscreen is helpful when we keep the amount of sun exposure fixed, but that the behavioral changes of those with sunscreen change the overall story. (The toy simulation below illustrates how both beliefs can hold at once.)
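As a sanity check that the two beliefs are compatible, here is a minimal simulation sketch. All numbers (per-hour risks, hours in the sun) are invented purely for illustration; the only point is that a protective per-hour effect and a net harmful intervention effect can coexist once behavior responds to the intervention:

```python
import random

# Invented per-hour risk rates: sunscreen reduces damage per hour of exposure.
RISK_PER_HOUR_NO_SUNSCREEN = 0.010
RISK_PER_HOUR_WITH_SUNSCREEN = 0.006

def hours_in_sun(sunscreen: bool) -> float:
    """Assumed behavioral response: people wearing sunscreen stay out longer."""
    return random.uniform(3.0, 5.0) if sunscreen else random.uniform(1.0, 2.0)

def risk(sunscreen: bool, hours: float) -> float:
    rate = RISK_PER_HOUR_WITH_SUNSCREEN if sunscreen else RISK_PER_HOUR_NO_SUNSCREEN
    return rate * hours

N = 100_000

# Belief #1: intervene on sunscreen only, letting behavior respond.
do_sunscreen = sum(risk(True, hours_in_sun(True)) for _ in range(N)) / N
do_nothing = sum(risk(False, hours_in_sun(False)) for _ in range(N)) / N

# Belief #2: intervene on sunscreen AND hold time in the sun fixed.
FIXED_HOURS = 2.0
fixed_sunscreen = risk(True, FIXED_HOURS)
fixed_nothing = risk(False, FIXED_HOURS)

print(f"intervention only: sunscreen {do_sunscreen:.4f} vs none {do_nothing:.4f}")
print(f"hours held fixed:  sunscreen {fixed_sunscreen:.4f} vs none {fixed_nothing:.4f}")
```

With these made-up numbers, the intervention raises net risk (belief #1) even though sunscreen helps at any fixed exposure time (belief #2).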
That last bit (which, I confess, I hadn’t actually noticed before) doesn’t say that “wearing sunscreen actually tends to increase risk of skin cancer”.
I agree. I think I read into it a bit on my first reading, when I was composing the post. But I still found the interpretation probable when I reflected on what the authors might have meant.
In any case, I concede based on what you could find that it’s less probable (than the alternatives). The interviewee probably didn’t have any positive information to the effect that getting someone to wear sunscreen causes them to stay out in the sun sufficiently longer for the skin cancer risk to actually go up on net. So my initial wording appears to be unsupported by the data, as you originally claimed.
But I don’t think your interpretation passes the smell test. If whoever wrote that page really believed that the overall effect of wearing sunscreen was to increase the risk of skin cancer (via making you stay out in the sun for longer), would they have said “We recommend sunscreen for skin cancer prevention”?
It’s pretty plausible on my social model.
To use an analogy: telling people to eat less is not a very good weight loss intervention, at all. (I think. I haven’t done my research on this.) More importantly, I don’t think people think it is. However, people do it all the time, because it is true that people would lose weight if they ate less.
My hypothesis: when giving advice, people tend to talk about ideal behavior rather than realistic consequences.
More evidence: when I give someone advice for how to cope with a behavior problem rather than fixing it, I often get pushback like “I should just fix it”, which seems to be offered as an actual argument against my advice. For example, if someone habitually stopped at McDonald’s on the way home from work, I might suggest driving a different way (avoiding that McDonald’s), so that the McDonald’s temptation doesn’t kick in when they drive past it. I might get a response like “but I should just not give in to temptation”. Now, that response is sometimes valid (like if the only other way to drive is significantly longer), but I think I’ve gotten responses like that when there’s no other reason except the person wants to see themselves as a virtuous person and any plan which accounts for their un-virtue is like admitting defeat.
So, if someone believes “putting on sunscreen has a statistical tendency to make people stay out in the sun longer, which is net negative wrt skin cancer” but also believes “all else being equal, putting on sunscreen is net positive wrt skin cancer”, I expect them to give advice based on the latter rather than the former, because when giving advice they tend to model the other person as virtuous enough to overcome the temptation to stay out in the sunlight longer. Anything else might even be seen as insulting to the listener.
(Unless they are the sort of person who geeks out about behavioral economics and such, in which case I expect the opposite.)
I think your modified wording is better, and wonder whether it might be improved further by replacing “correlated” with “associated” which is still technically correct (or might be? the Australian study above seems to disagree) and sounds more like “sunscreen is bad for you”.
I was really tempted to say “associated”, too, but it’s vague! The whole point of the example is to say something precise which is typically interpreted more loosely. Conflating correlation with causation is a pretty classic example of that, so, it seems good. “Associated” would still serve as an example of saying something that the precise person knows is true, but which a less precise person will read as implying something different. High-precision people might end up in this situation, and could still complain “I didn’t say you shouldn’t wear sunscreen” when misinterpreted. But it’s a slightly worse example because it doesn’t make the precise person sound like a precise person, so it’s not overtly illustrating the thing.
I guess that makes sense. I was just not sure whether you meant that, vs the reverse (my model of postrats is loose enough that I could see either interpretation).
My impression was that equivocation typically refers to cases involving articulated words, while bucket errors refers to more abstract cases, where the issue lies more in the concepts than in the stated vocabulary. I’m curious if there is a decent definition of equivocation out there to make it a bit more clear what the specific boundaries are.
I don’t disagree, but I’m assuming that the brain uses some signals to model what’s going on, and by calling bucket errors “equivocation” I’m treating that internal symbology as a language. So, yes, we can make a distinction between buckets and equivocation by making equivocation all about externally articulated words. But to the extent that equivocation can happen in anything we can treat as a sort of language, I feel justified in grouping them together.
Similarly, I’m curious if other readers here agree that equivocation is the same thing as “bucket errors”. If so, I kind of would like to replace ongoing discussion of “bucket errors” with equivocation instead. I’m curious myself because I’m sure I’ll want to refer to the concept in the future, but am not sure which word to use. I’ve used “bucket error” here, recently, for instance.
I’m not sure. It might be useful to have a separate term for “equivocation broadly construed” (verbal equivocation + bucket errors + Buddhist identification + psychotherapy’s fusion), and then use the more specific terms when you want to be more specific. Each term has, at least, slightly different connotations.
By the way, one thing I only addressed very briefly in the essay: the way “fusion” and “identification” are normally explained, one would think that they primarily/exclusively refer to equivocation between some X and the self. I think this is due to a confused ontology. For example, a central example of fusion is getting caught up in anger, so that angry actions seem necessary. De-fusion would be moving from “Frank is an idiot who needs to be punched in the face” to “I am feeling angry at Frank right now”. This is obviously a map/territory sort of distinction, plus a temporal distinction. Yet it often gets explained as a self-vs-other distinction. I think this is a result of an overzealous application of object-vs-subject. To make the map/territory distinction, or even the temporal distinction, one must “take the anger as an object”: sort of “see it from the outside”, create a token in working memory which refers to it. Psychologists then make the leap that because we can call this “taking the anger as an object”, and the opposite of “object” is “subject”, it must have been “taken as subject” before.
And that’s not even total nonsense? Taken as a definition I’m OK with it: “Taking something as subject means it’s a fact about you which you haven’t generated an internal symbol for yet”.
But I think people then confuse it with somehow moving a symbol from self to non-self status, like, treating the anger as something inside you vs an outside force. This is also a thing. Maybe even a thing that’s worth throwing in the same cluster! But IMHO, it’s a much more complicated phenomenon. I don’t think I want to take it as the defining feature or even the central case of a cluster.
I think Buddhists are doing almost exactly the same thing with “identification”. (At least, American Buddhists.) The way the phrase is used, you identify with something, rather than identifying two things with each other (such as map and territory). Is the state of no-self one where all symbols are moved out of the “self” box and into the “other” box? Or is no-self a state where facts about the self can be fluently symbolized? (So that the gap of time between being angry and noting “I am angry” is very small, making anger easier to respond to appropriately.) The first sounds like a psychological trick: dissociating to reduce suffering. The second sounds like an actual cognitive skill.