“And the difference in graduate training in the two programs is, HPS you come in, write some papers, get out in 6-8 years, get a job, everybody does that. The Pitt Philosophy program you come, think some things, try to think the deep thoughts; the very best people go on to an awesome career, the rest of you, well, we’re happy to burn through a hundred grad students to find a diamond.”—I found this passage surprising. I’d expect that the ease of finding a job in an area such as philosophy or HPS would be based on the availability of funding, not differences in approach.
I really dislike the fiction that we’re all rational beings. We really need to accept that sometimes people can’t share things with us. Stronger: not just accept, but appreciate people who make this choice for their wisdom and tact. ALL of us have ideas that will strongly trigger us, and if we’re honest and open-minded, we’ll be able to recall situations when we unfairly judged someone because of a view that they held. I certainly can, way too many times to list.
I say this as someone who has a really strong sense of curiosity, knowing that I’ll feel slightly miffed when someone doesn’t feel comfortable being open with me. But it’s my job to deal with that, not the other person.
Don’t get me wrong. Openness and vulnerability are important. Just not *all* the time. Just not *everything*.
Thanks for writing this comment. I agree with you that simulacra levels and the unnamed object level vs social reality grid should ideally be separated as concepts. Also, thanks for saving me the effort of adding my own theory here (I was planning to eventually, but I have a tendency to procrastinate). Anyway, I’ll just add that the main purpose of my characterisation was to try to explore some of the religious language that Baudrillard was using.
I like the general idea, but I’d be wary of venturing so far in terms of privacy that the usability becomes terrible and no-one wants to use it.
Interesting. I like the grid model, and in some ways it is more natural than the four separate levels.
“‘Bad’ requires defining. Define the utility function, and the answer falls out”—Exactly. How should it be defined?
I guess there is The Motte on Reddit, but I could see benefits of someone creating a separate community. One problem is that far more meta discussion needs to occur on how to have these conversations.
One thing this leaves out is that pragmatism carries the risk that you are completely misunderstanding what is going on. Sometimes the risk is worth it and sometimes it isn’t, though it is hard to tell in advance.
Maybe I should have said that there are two sides to Ra—the institutional incentive and the reason why people fall for this or (stronger) want this.
I’m really keen to see the later posts in this series, since Lou’s posts are often somewhat tricky to decipher.
I formed my own opinion at the start, but I didn’t post it right away since I didn’t want to bias other people into agreeing with me. I guess the way I’ll answer this will be slightly different from the other answers, since I think the dynamics of the situation are more complex than an idealisation of vagueness. Pjeby seems closer to the mark when they say it’s a preference for mysterious, prestigious authority, but again I think we have to dig deeper.
I see Ra as a dynamic which tends to occur once an organisation has obtained a certain amount of status. At that point there is an incentive and a temptation to use that status to defend itself against criticism. One way of doing that is providing vague but extremely positive-sounding non-justifications for the things that it does and using the status to prevent people from digging too deep. This works since there are often social reasons not to ask too many questions. If someone gives a talk, to keep asking follow-ups is to crowd out other people. People will often assume that someone who keeps hammering a point is an ideologue, or they simply lose interest. In any case, the questions can usually be answered with additional layers of vagueness.
This also reminds me of the concept of the hyperreal, or realer than real. Organisations that utilise Ra become a simulation of a great organisation instead of the great organisation that they might once have been. By projecting this image of perfection, they can feel realer than any real great organisation, which will inevitably have its faults and hence inspire doubt.
Great to hear that this article helped you.
Oh, one more thing I forgot to mention. This idea of Conceptual Engineering seems highly related to what I was discussing in Constructive Definitions. I’m sure this kind of idea has a name in epistemology as well, although unfortunately, I haven’t had the time to investigate.
Thanks for writing this post. Better connecting the discussion on Less Wrong with the discussions in philosophy is important work.
Also, how is the idea of conceptual engineering different from Wittgenstein’s idea of language as use?
Why do you say it isn’t an emotional state?
I’ve always found the concept of belief in belief slightly hard to parse cognitively. Here’s what finally satisfied my brain: whether you will be rewarded or punished in heaven is tied to whether or not God exists, while whether you feel a push to go to church is tied to whether or not you believe in God. If you do go to church and want to go, your brain will say, “See, I really do believe”, and it’ll do the reverse if you don’t go. However, going to church only affects your belief in God indirectly, through your “I believe in God” node. Putting it another way, going to church is evidence that you believe in God, not evidence that God exists. Anyway, the result of all this is that your “I believe in God” node can become much stronger than your “God exists” node.
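The node structure above can be sketched as a toy Bayesian chain—“God exists” → “I believe in God” → “goes to church”—where observing churchgoing updates the belief node directly but the existence node only indirectly. All the probabilities here are made up purely for illustration; the point is just the asymmetry in how much each node moves.

```python
# Toy chain: G ("God exists") -> B ("I believe in God") -> C ("goes to church").
# All numbers are invented for illustration only.
p_g = 0.5                                # prior P(G)
p_b_given_g = {True: 0.7, False: 0.3}    # P(B | G): belief is weakly tied to the fact
p_c_given_b = {True: 0.9, False: 0.1}    # P(C | B): churchgoing depends only on belief

# Enumerate the joint over (G, B) with the observation C = True folded in.
joint = {}
for g in (True, False):
    for b in (True, False):
        pg = p_g if g else 1 - p_g
        pb = p_b_given_g[g] if b else 1 - p_b_given_g[g]
        joint[(g, b)] = pg * pb * p_c_given_b[b]

z = sum(joint.values())
p_b_given_c = (joint[(True, True)] + joint[(False, True)]) / z  # P(B | C)
p_g_given_c = (joint[(True, True)] + joint[(True, False)]) / z  # P(G | C)

print(f"P(believes | went to church) = {p_b_given_c:.2f}")  # jumps from 0.50 to 0.90
print(f"P(God exists | went to church) = {p_g_given_c:.2f}")  # only moves to 0.66
```

With these made-up numbers, seeing yourself go to church pushes the belief node from 0.50 to 0.90 while the “God exists” node only drifts to 0.66—exactly the gap between the two nodes described above.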