DM me anything
LVSN
(surprised) No way!! I bought that book three months ago, at the recommendation of no one. I haven’t read it yet, but it’s good to see that I have made a good investment in my own judgment.
I just love this quote. (And, I need it in isolation so I can hyperlink to it.)
“When I step back in Newcomb’s case, I don’t feel especially attached to the idea that it’s the way, the only “rational” choice (though I admit I feel this non-attachment less in perfect twin prisoner’s dilemmas, where defecting just seems to me pretty crazy). Rather, it feels like my convictions about one-boxing start to bypass debates about what’s “rational” or “irrational.” Faced with the boxes, I don’t feel like I’m asking myself “what’s the rational choice?” I feel like I’m, well, deciding what to do. In one sense of “rational” – e.g., the counterfactual sense – two-boxing is rational. In another sense – the conditional sense — one-boxing is. What’s the “true sense,” the “real rationality”? Mu. Who cares? What’s that question even about? Perhaps, for the normative realists, there is some “true rationality,” etched into the platonic realm; a single privileged way that the normative Gods demand that you arrange your mind, on pain of being… what? “Faulty”? Silly? Subject to a certain sort of criticism? But for the anti-realists, there is just the world, different ways of doing things, different ways of using words, different amounts of money that actually end up in your pocket. Let’s not get too hung up on what gets called what.”
— Joe Carlsmith
Someone (Tyler Cowen?) said that most people ought to assign much lower confidences to their beliefs, like 52% instead of 99% or whatever.
oops I have just gained the foundational insight for allowing myself to be converted to (explicit probability-tracking-style) Bayesianism; thank you for that
I always thought “belief is when you think something is significantly more likely than not; like 90%, or 75%, or 66%.” No; even just having 2% more confidence is a huge difference given how weak existing evidence is.
If one really rational debate-enjoyer thinks A is 2% more likely than its negation (51% vs. 49%), that’s better than a hundred million people shouting that the negation of A is 100% likely.
Does thinking that A is 45% likely mean that you think the negation of A is 5% likely, or 55% likely? Don’t answer that; the negation is 55% likely.
But we can imagine making a judgment about someone’s personality. One human person accepts MBTI’s framework that thinking and feeling are mutually exclusive personalities, so when they write that someone has a 55% chance of being a thinker type, they make an implicit, untracked judgment that the person has an almost 45% chance of being a feeler AND not a thinker. But a rational Bayesian is not so silly, of course; whether someone is a feeler and whether someone is a thinker are two independent questions, buddy.
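A quick Python toy (my own, with made-up numbers; nothing here comes from MBTI or anyone’s actual estimates) shows the difference between the two framings:

```python
# Toy sketch with made-up numbers: contrast a mutually-exclusive framing,
# where writing down P(thinker) silently fixes P(feeler), with a framing
# that tracks the two traits as independent questions.

p_thinker = 0.55

# Mutually exclusive framing: the implicit, untracked judgment.
p_feeler_exclusive = 1 - p_thinker  # forced to be ~0.45

# Independent framing: the feeler question gets its own estimate,
# free to be anything in [0, 1] regardless of p_thinker.
p_feeler_independent = 0.60

print(round(p_feeler_exclusive, 2))  # 0.45
print(p_feeler_independent)          # 0.6
```

The point of the sketch is only that the second framing adds a degree of freedom the first framing silently removes.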
The models in a person’s mind are somewhat predictable from the estimate on his paper, and while his estimate may be true, the models his predictions stem from may be deeply flawed.
By the logic of personality taxonomy and worldly relations, “the negation of A” has many connotations.
Maybe the trouble is with using the words ‘negation’, ‘opposite’, and ‘falsehood’ instead of the word ‘absence’. Presence of falsehood evidence is not the same as absence of truth evidence, even if absence of truth evidence is itself one kind of weak falsehood evidence.
My shortform post yesterday about proposition negations: could I get some discussion on that? Please DM me if you like! I need to know if and where there’s been good discussion about how Bayesian estimate tracking relates to negation! I need to know if I’m looking at it the wrong way!
One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A), always has a third option: MU = “(Model Uncertainty) I’m confused; maybe the question doesn’t make sense, maybe A isn’t a coherent claim, maybe the concepts I used aren’t the right concepts to use, maybe I didn’t think of a possibility, etc. etc.”.
I tend to think of writing my propositions in notepad like
A: 75%
B: 34%
C: 60%
And so on. Are you telling me that “~A: 75%” means not only that ~A has a 75% likelihood of being true, but also that A vs ~A has a 25% chance of being the wrong question? If that were true, I would expect ‘A: 75%’ to mean not only that A was true with a 75% likelihood, but also that A vs ~A is the right question with 75% likelihood (high model certainty). But can’t a proposition be more or less confused/flawed on multiple different metrics, to someone who understands what this whole A/~A business is all about?
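To make the bookkeeping concrete, one option is to track the estimate as a three-way distribution over A, ~A, and MU instead of a single number. The 15/10 split below is my own illustrative assumption, not something implied by “~A: 75%”:

```python
# Illustrative split: "~A: 75%" read as part of a three-way distribution
# over A, ~A, and MU ("the question itself may be confused").
estimate = {"~A": 0.75, "A": 0.15, "MU": 0.10}
assert abs(sum(estimate.values()) - 1.0) < 1e-9  # probabilities sum to 1

# Conditional on the question making sense at all (ruling out MU),
# renormalize over the A vs. ~A part:
p_sense = estimate["A"] + estimate["~A"]          # 0.90
p_not_a_given_sense = estimate["~A"] / p_sense
print(round(p_not_a_given_sense, 3))  # 0.833
```

On this bookkeeping, “~A: 75%” does not force the remaining 25% onto A; it is split between A and MU, and the split is an extra judgment you have to track explicitly.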
If you think it’s very important to think about all the possible adjacent interpretations of a proposition as stated before making up your mind, it can be useful to record your initial agreement as a small minimum divergence from total uncertainty, where the residual uncertainty represents your uncertainty about whether you’ll come up with better interpretations of the thing you think you’re confident about. Only after considering that many interpretations should you reach for more ambitious numbers like 90%.
If you always do this and you wind up being wrong about some belief, then it is at least possible to think that the error you made was failing to list a sufficient number of sufficiently specific adjacent possibilities before asking yourself more seriously what their true probabilities were. Making distinctions is a really important part of knowing the truth; don’t pin all the hopes of every A-adjacent possibility on just one proposition in the set of A-adjacent possibilities. Two A-adjacent propositions can have great, or at least nontrivial, differences in likelihood; thinking only about A can mislead you about A-synonymous things.
Would it be better or worse if someone’s takeaway from this post was that no one should reason about what makes a course of action or policy better or worse? That they should just copy other people?
What if copying other people meant burning suspected witches alive? What if some people who burn witches aren’t really sure about the correctness of what they’re doing, they care about that kind of thing, and yet they profess great certainty that their acts are in accordance with correct values? Should I not try to play to the part of them which is uncertain in order to prevent a cruel outcome, against their initial value statement?
Would you want me to try to give you values which aren’t upstream from genocide, if you were born in a place which gave you values that were upstream from genocide?
The topics which are being invoked are way more fraught than is being implied. It’s not obvious that you’re not sneaking in a takeaway for a general topic by using this one question in a case where the takeaway doesn’t generalize across the space of questions in that topic. On good faith, I don’t assume you’re trying to do that, but it’s good to check; lampshade the possibility.
Most people talk a lot about how they hate hypocrites. Hypocrites say you’re supposed to do one thing, and then they do another, and people don’t like that. I can understand admitting that it is hard to live in accordance with your stated standards, but people shouldn’t lie that they believe there is something plausibly, contextually good about a standard when they don’t actually believe there is. Otherwise you can’t hold people accountable to the standards that both of you say you think have something maybe-good about them.
Of course, there is an important distinction between lying and simulacrum level 3, wherein everyone understands the situation. People who aren’t in on the simulacrum level 3 shouldn’t be punished for wanting to understand the inconsistency. Once the inconsistency is explained, there is no problem. The explanation should be open for everyone to see, so as not to discriminate against those who still don’t know. I don’t think it’s autistic to be unaware of the reasons for every weird inconsistency between word and action, and it’s definitely not autistic to ask about them.
No one is simulacrum-3-omniscient, and everyone is born with very little knowledge of simulacrum 3 situations. It would be poorly calibrated to expect a consistent flow of uninterrupted simulacrum 3 stability given how little most people know.
I was never a fan of this advice to remove all reference to the self when making a statement. If you think everything is broken or complicated and you don’t think you have strong reasons to think you’re doing any better than average, why pretend that everything is fine and we can just be authorities on the way that things are rather than how they impressed us as being?
My English teacher docked my grade every time I explained things from a perspective as humble and precarious as honest, good epistemics require me to report from, using terms like “I think” and “it may be the case”.
Now, I could understand if the idea was that no one knew anything and we were all just roleplaying and that school was there to teach me to roleplay. But her defense against my skepticism towards non-subjective reporting was, and I quote, “there’s a system for how these things work; it’ll be explained in later grades.”
It was at that time I was getting truly fed up with my educators. I will not lie about my confidence in my authority.
In defense of strawmanning: there’s nothing wrong with wanting to check whether someone else is making a mistake. But if you forget to frame the check as a question (e.g. “Just wanna make sure: what’s the difference between what you’re thinking and the thinking of my more obviously bad, made-up person who speaks similarly to you?”), then the natural way it comes out will sound accusatory, as in our typical conception of strawmanning.
I think most people strawman because it’s shorthand for this kind of attempt to check, but then they’re also unaware that they’re just trying to check, and they wind up defending their (actually accidental) apparent hostility, and then a polarization happens.
Strawmanning happens when we take others’ judgments as plausible evidence of more general models and habits that those judgments play a part in. By asking for clarity of what models inform a judgment, we can get better over time at inferring models from judgments. It can become a limited form of mind reading.
Interesting stuff from the Stanford Encyclopedia of Philosophy:
2.8 Occam’s Razor and the Assumption of a “Closed World”
Prediction always involves an element of defeasibility. If one predicts what will, or what would, under some hypothesis, happen, one must presume that there are no unknown factors that might interfere with those factors and conditions that are known. Any prediction can be upset by such unanticipated interventions. Prediction thus proceeds from the assumption that the situation as modeled constitutes a closed world: that nothing outside that situation could intrude in time to upset one’s predictions. In addition, we seem to presume that any factor that is not known to be causally relevant is in fact causally irrelevant, since we are constantly encountering new factors and novel combinations of factors, and it is impossible to verify their causal irrelevance in advance. This closed-world assumption is one of the principal motivations for McCarthy’s logic of circumscription (McCarthy 1982; McCarthy 1986).
3. Varieties of Approaches
We can treat the study of defeasible reasoning either (i) as a branch of epistemology (the theory of knowledge), or (ii) as a branch of logic. In the epistemological approach, defeasible reasoning can be studied as a form of inference, that is, as a process by which we add to our stock of knowledge. Alternatively, we could treat defeat as a relation between arguments in a disputational discourse. In either version, the epistemological approach is concerned with the obtaining, maintaining, and transmission of warrant, with the question of when an inference, starting with justified or warranted beliefs, produces a new belief that is also warranted, given potential defeaters. This approach focuses explicitly on the norms of belief persistence and change.
In contrast, a logical approach to defeasible reasoning fastens on a relationship between propositions or possible bodies of information. Just as deductive logic consists of the study of a certain consequence relation between propositions or sets of propositions (the relation of valid implication), so defeasible (or nonmonotonic) logic consists of the study of a different kind of consequence relation. Deductive consequence is monotonic: if a set of premises logically entails a conclusion, then any superset (any set of premises that includes all of the first set) will also entail that same conclusion. In contrast, defeasible consequence is nonmonotonic. A conclusion follows defeasibly or nonmonotonically from a set of premises just in case it is true in nearly all of the models that verify the premises, or in the most normal models that do.
The two approaches are related. In particular, a logical theory of defeasible consequence will have epistemological consequences. It is presumably true that an ideally rational thinker will have a set of beliefs that are closed under defeasible, as well as deductive, consequence. However, a logical theory of defeasible consequence would have a wider scope of application than a merely epistemological theory of inference. Defeasible logic would provide a mechanism for engaging in hypothetical reasoning, not just reasoning from actual beliefs.
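A minimal Python toy (my own sketch using the classic birds-fly default, not McCarthy’s circumscription formalism) shows what “nonmonotonic” means here: a conclusion licensed by a default rule is withdrawn when the premise set grows.

```python
def flies(facts: set) -> bool:
    """Default rule: birds fly, unless the facts list a known exception."""
    if "penguin" in facts:   # the exception defeats the default
        return False
    return "bird" in facts

# A monotonic consequence relation would preserve conclusions under
# added premises; here, enlarging the premise set retracts one.
print(flies({"bird"}))             # True  -- the default applies
print(flies({"bird", "penguin"}))  # False -- superset of premises,
                                   #          conclusion withdrawn
```

The second call is the defeasibility in miniature: the extra premise doesn’t contradict “bird”, it just activates an exception that defeats the default inference.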
Googling ‘McCarthy Logic of Circumscription’ brought me here; very neat.
I found the Defeasible Reasoning SEP page because I found this thing talking about defeasible reasoning, which I found because I googled ‘contextualist Bayesian’.
I am convinced that moral principles are contributory rather than absolute. I don’t like the term ‘particularist’; it sounds like a matter of arbitration when you put it that way; I am very reasonable about what considerations I allow to contribute to my moral judgments. I would prefer to call my morality contributist. I wonder if it makes sense to say that utilitarians are a subset of contributists.
Lately I’ve been thinking about what God would want from me, because I think the idea was a good influence on my life. Here’s a list in progress of some things I think would characterize God’s wants and judgments:
1. God would want you to know the truth
2. If you find yourself flinching at knowledge of serious risk factors (e.g. of your character or moral plans), God would urgently want to speak with you about it
3. Resist the pull of natural wrongness
3.1. Consider all of the options which are such that you would have to be looking for the obvious/common sense options in order to find them
3.2. Consider many non-obvious options; consider that the right thing to do is a different concretization of an abstracted version of the wrong thing to do, is adjacent to the wrong or seemingly-right thing to do, queers the seemingly-right or wrong thing to do, or is a thing in a category which cuts sideways through categories of abstractly right or wrong things to do
3.3. Every night, go over a list in progress of cognitive biases and search your memories and feelings honestly as to whether you gave into any of them
4. By one third of the set of good definitions of ‘making progress’ that you can come up with, or by no more than six good definitions out of eighteen, make it 80% true about you that you are making progress; don’t be going nowhere
4.1. On an average rate of twice every five days, do a good day’s work
4.2. On an average rate of once every three weeks, spend a day working really hard
4.3. For every extra amount of work beyond the rates specified above, God will be extra proud of you, which can become a source of great esteem and comfort.
5. Reward yourself temperately for making progress and resisting the pull of natural wrongness; your morality should be as an enlightened, wiser-than-you friend whom you eagerly wish you were strong enough to follow, not a slaveholder making you regret your acquaintanceship.
6. In your life, always be faithful and reliable to at least one great moral principle; have one moral job or nature that God will consider you remarkable for
7. Recognize the vulnerability of others as unsettlingly reminiscent of the vulnerability in yourself
Feel free to leave suggestions for more entries; aim for excellence, and if you feel honestly that your suggestion is excellent in spite of acknowledged strong possibilities that it may be subjective and biased, don’t hesitate to share. Or, hesitate the right amount before sharing; either is good.
My response to it is: What makes you think it is naive idiocy? It seems like naive intelligence if anything. Even if the literal belief is false, that doesn’t make it a stupid thing to act on as if true. If everyone acted as if it were true, it would certainly be a stag-hunt scenario! And the benefits are still well worthwhile even if the other does not perfectly cooperate.
Stupid uncritical intolerant people will think you look childish and impertinent, but intelligent people will notice you’re being bullied and you’re still tolerating your interlocutor, and they will think you’re super-right. You divide the world into intelligent+pro-you and stupid+against-you.
Also I might note that your attempted counter-example has an implied tone which accuses naive idiocy, rather than sounding curious with salient plausibility. The saliently plausible thing, in your attempted counter-example, is an implicit gesture that there is not a difference.
What is normally called common sense is not common sense. Common sense is the sense that is actually common. Idealized common sense (which, I shall elaborate, is the union of the set of thoughts you would have to be carefully trying to be common-sensical in order to make salient in your mind and the set of natural common sense thoughts) should be called something other than common sense, because making a wide-sweeping mental search about possible ways of being common-sensical is not common, even if a general deference and post-hoc accountability to the concept of common sense may be common.
These two possibilities are not mutually exclusive; talking is a thing that people do. The correct answer is that it’s the latter case (verbal theory) as an instance of the former category of cases (cases where people copy the behavior of others, such as fashions of thinking and talking).
I’m also not very sure that removing the ability to negotiate theories of objectivity or fairness, which are naturally controversial subjects, would make people more peaceful on average, taking it as a limiting condition on the development of culture starting with the first appearance of any human communication; I expect removing such an ability would make world histories more violent on average.
Some subset of [those who agree that ‘when two people disagree, only one of them can be right’] and [those who agree with A, where A := ‘when two people disagree, they can both be right’ and A ≈ A′, where A′ := ‘when two people “disagree,” they might not disagree, and they can both be right’], do not have a disagreement that cashes out as differences in anticipated experiences, and therefore may only superficially disagree.
Note 1: in order for this to be unambiguously true, ‘anticipated experiences’ necessarily includes anticipated experiences given counterfactual conditions.
Note 1.1: Counterfactuals are not contrary to facts; they have attributes which facts can also share, and, under varying circumstances, the ratio of [the set of relevant shared attributes] to [the set of relevant unshared attributes] between a counterfactual situation and known situation may be sufficiently large that it becomes misleading to characterize the situations as [opposite] or [mostly disagreeing, as opposed to mostly agreeing]. A more fitting word would be ‘laterofactual’.
Note 1.1.1: When people say B, where B := ‘C and D disagree’, the set of non-excluded, non-[stupidly interpretable] implicatures of the statement B includes E, where E := ‘C and D mostly disagree’, and not only F, where F := ‘C and D have any amount of disagreement’.