Even if it were 1⁄10, it might be the most important 1⁄10. Something like that is in fact plausible: if someone were optimally trying to mostly look factual while pushing a political agenda, they would probably sort statements by ratio of [political benefit of lying] / [expected cost of being caught lying], pick a threshold, and lie whenever that ratio exceeds the threshold; and political benefit, as evaluated by this hypothetical journalist-hack, likely correlates with importance to the reader.
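A toy sketch of that selection rule, with all numbers invented purely for illustration (not a claim about any real journalist's arithmetic): score each candidate statement by political benefit and expected cost of getting caught, and lie exactly where the ratio clears a threshold. The lies end up concentrated on the high-benefit (i.e. important) claims.

```python
# Toy model of the "sort by benefit/cost, lie above a threshold" rule.
# All figures are made up for illustration.
statements = [
    # (description, political_benefit, expected_cost_if_caught)
    ("minor factual filler",                0.1, 1.0),
    ("background statistic few will check", 0.8, 0.5),
    ("central claim of the story",          3.0, 1.5),
    ("easily checkable quote",              1.0, 4.0),
]

THRESHOLD = 1.5
lies = [(name, round(benefit / cost, 2))
        for name, benefit, cost in statements
        if benefit / cost > THRESHOLD]
print(lies)
# -> only the high-ratio items, which skew toward the claims that matter most to readers
```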
I would put it this way: being vulnerable is a probably-unfortunate side-effect of a means to an end, not an end in itself, and it’s usually worth tracking just what end you have in mind. (And, yes, if you had a cost-free alternative means that achieved the same result but didn’t make you vulnerable, then that would be an improvement.) For example: “telling someone a secret that would enable them to shame you for it”, or “letting yourself rely on someone else to take care of a thing for you [such that if they fell through, it would hurt you]”, or “letting yourself care about someone else’s judgment of you”. There are situations where each of these is an unavoidable part of a plan with positive expected value, and situations where they create needless risks with no benefit.
Let’s see if I can capture the good parts of potential counter-stances:
1. It is imaginable that someone is so afraid of the negative consequences that they can’t really think rationally about them, in which case it’s plausibly good to deliberately create those situations in relatively safe circumstances and teach your brain that it’s actually not that bad (“exposure therapy”).

    Here, you want situations in which your brain thinks you’re much more vulnerable than your rational mind believes. Having picked a situation to expose yourself to, you’d like the actual chance of getting hurt to be as close to zero as possible.

2. You could deliberately make yourself vulnerable to person X’s actions for the purpose of evaluating person X: can you actually trust them to treat you well?

    In this case, the important thing is that person X believes you’re vulnerable and has the opportunity to do something about it. If you are secretly emotionally ironclad, or fully able to intercept/punish X’s misbehavior, or whatever, then so much the better. It could be good to find ways to make yourself look more vulnerable than you are, to enable this kind of testing. On the other hand, fooling them in this way could either be impossible or carry its own risks.

3. On “shame-able secrets”, it can be ideal to take the stance: “Some people will shame me and others don’t care. I’m fine with cutting the first group out of my life and dealing solely with the second group. This is most efficiently accomplished by being completely open about this secret (and possibly deliberately broadcasting it).”

    In this case, after the initial phase of dealing with the haters you already know, you have made yourself invulnerable to any future social shaming for this secret. And since you’re erring on the side of filtering out any new haters before they become socially important to you, you’re essentially doing your best to keep yourself invulnerable.

    There are also lesser versions of this, where you “out” yourself to a particular group (such as writing about your fetishes to a niche online forum), or even a single person, to “get it over with” at a time and place of your choosing.

    There are also greater versions of this, where you do it with lots of “shame-able secrets”. There are “economies of scale” here: the people you cut out, the steps you take to prepare for the fallout, and so on, tend to overlap from one secret to the next.
I think the conclusion stands: in all circumstances, actual vulnerability is something you’d like to minimize; sometimes it’s correct to do things that look like seeking out vulnerability, but on closer examination you’re always seeking out something else that happens to be correlated (or to look like it’s correlated) with actual vulnerability, which is something you tolerate if, and only if, there aren’t better choices.
Suggestion for how to pose the first problem: “Imagine that someone places a large number of mirrors around the Earth so that the same sunlight hits the Earth as before, but now it lands evenly spread around the Earth’s surface instead of hitting the daytime half of the Earth.” And, probably: “The mirrors don’t reflect any of Earth’s radiation back onto the Earth.”
Protip: replace the “x” in the URL with “xcancel”. Currently works well.
It seems like the move for height would be to make men taller and women shorter? Or to increase the sexually dimorphic difference, and separately pick the optimal average.
An argument in favor of it is that “free association” is inherently a fuzzy human thing, where the process is just thinking for a bit, seeing what you come up with, and at some point declaring victory; there is nothing in it that could possibly guarantee correctness. Arguably, anyone who encounters the term should be conscious of this, and therefore notice that it’s an inappropriate step in a logical argument that purports to establish high certainty. Perhaps they’d even notice that the term itself is paradoxical: in a logical context, “exhaustion” must be a rigorous process, but “free association” is inherently unrigorous.
I’m not sure if I buy the argument. The author of “The Design of Everyday Things” warns against being too clever with names and assuming that normal people will get the reference you intend. But… I dunno.
Asking Claude for examples
> There are languages out there in which the epistemic status is a mandatory part of a sentence (“I’ve seen”, “so I have been told”, “I think”, etc.)
Is this true? Examples?
Yes, this is absolutely true! These grammatical features are called evidentials or evidentiality markers, and they’re mandatory in many languages around the world.
Examples:
Turkish is a classic example. It has two past tenses that encode evidentiality:
- -di (direct/witnessed past): “I saw it happen”
- -miş (indirect/inferred past): “I heard/inferred it happened”
So “geldi” means “he came” (and I witnessed it), while “gelmiş” means “he came” (but I didn’t see it—I heard about it or saw evidence of it).
Quechua (Andean languages) has several evidential markers:
- -mi: direct knowledge
- -si: reported/hearsay
- -chá: conjecture
Tibetan has a rich evidential system distinguishing personal knowledge, inference, and hearsay.
Japanese has evidential markers too, though they’re sometimes considered less grammaticalized:
- rashii: hearsay (“I heard that...”)
- yōda: inference based on evidence
- sōda: reported information
Bulgarian and other Balkan languages have “renarrative” mood marking information learned from others rather than directly experienced.
The key point is that in these languages, you often cannot make a statement about a past event without indicating your epistemic basis for the claim. Leaving it out would be grammatically incomplete, like forgetting to conjugate a verb in English. This forces speakers to constantly track and communicate their source of knowledge.
Interesting.
Hmmph. If he wants to push people to do more research so that they can make statements without any such qualifiers—or to shut up when they haven’t done enough research to have anything useful to say—then I may sympathize. If he wants them to make themselves sound more certain than they are, then I oppose.
Rescue the girl and plan to explain to the wealthy people what happened. Possibly try to bring her with him, for purposes including lending credence to his story.
Indeed. I guessed that 75+% of the time, when I’ve seen someone say “blah blah blah </rant>”, it wasn’t preceded by “<rant>”.
Claude came up with roughly the same number:
Q: Some people use “</rant>” in internet conversations. Estimate the percentage of time that it’s preceded by “<rant>”.
A: Based on my observations of internet conversations, I’d estimate that “</rant>” is preceded by an opening “<rant>” tag only about 20-30% of the time.
The use of the HTML end tag implies that this disclaimer would appear after the text it describes. But it seems like it would be best put before the text? (Perhaps this is just another thing that “ideally would be this, but in practice will often be that”?) If the text is a series of chat messages, then, yeah, you may not realize a disclaimer should apply until after you’ve sent the things to which it should apply. But if it’s one big post, then it’s always easy to move it to the top of the post.
After a couple of minutes of poking around, I can’t figure out how to fix it in the interface the page editor gives me, but: The three images on this page in the agree/disagree/Moloch list use a URL beginning with localhost:3000, instead of lesswrong.com or a “//” protocol-relative address (which seems most ideal), and thus don’t load for anyone not running an instance of lesswrong at localhost:3000.
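For illustration, a hypothetical cleanup along those lines; this is just a sketch (the image path and exact host are made up, and it is not how the LessWrong editor actually stores images), showing the kind of rewrite meant by “protocol-relative”:

```python
import re

def fix_localhost_urls(html: str, host: str = "www.lesswrong.com") -> str:
    """Rewrite dev-server image URLs into protocol-relative ones."""
    return re.sub(r"https?://localhost:3000", "//" + host, html)

# Hypothetical example (the image path is invented):
broken = '<img src="http://localhost:3000/images/moloch.png">'
print(fix_localhost_urls(broken))
# -> <img src="//www.lesswrong.com/images/moloch.png">
```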
> IMO, like almost every social, psychological, and cultural trait, it exists on a continuum.
For natural predispositions, I’m sure that’s true; but to the extent that the trait is a result of learning / training / experience / habit, it’s quite possible for there to be effects that push it towards one extreme or another, resulting in a bimodal distribution.
A category that comes to mind: if there’s a behavior that people have some normally-distributed natural inclination towards, but which is suppressed in most of society, and if there’s a place where that behavior is relatively unsuppressed, then (to the extent that the behavior is important to them) the people with a strong inclination to do it will move to that location, and that place will probably end up tolerating or even supporting it more, and this positive feedback loop can iterate. If the result is stable, then it might form what you could call a culture.
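A toy simulation of that feedback loop, entirely my own construction (the thresholds, the migration rule, and the amplification numbers are all invented), just to show how a unimodal inclination plus sorting and reinforcement can leave the *expressed* trait looking bimodal:

```python
import random
import statistics

random.seed(0)
N = 10_000
inclination = [random.gauss(0.0, 1.0) for _ in range(N)]   # unimodal predisposition
in_enclave = [False] * N          # the one relatively unsuppressed place
support = 0.0                     # how supportive the enclave has become

for _ in range(10):               # iterate the feedback loop
    threshold = 1.5 - support     # more support -> easier to justify moving
    in_enclave = [already or inc > threshold
                  for already, inc in zip(in_enclave, inclination)]
    # the more expressers live there, the more the place supports the behavior
    support = min(1.0, 2.0 * sum(in_enclave) / N)

# expressed behavior: suppressed toward zero outside, reinforced by habit inside
expressed = [inc + 2.0 if there else 0.3 * inc
             for inc, there in zip(inclination, in_enclave)]

inside = [e for e, there in zip(expressed, in_enclave) if there]
outside = [e for e, there in zip(expressed, in_enclave) if not there]
print(f"{len(inside)} of {N} end up in the enclave")
print("mean expressed behavior outside:", round(statistics.mean(outside), 2))
print("mean expressed behavior inside: ", round(statistics.mean(inside), 2))
```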
Seems like that depends on details of the problem. If the receptor has zero function, then yes. If functionality is significantly reduced but nonzero… maybe.
> I will add that this problem is the most good faith version of the complaints with “woke” media/fiction (the bad faith one being of course people who simply don’t like any progressive ideas at all, no matter how they’re packaged)
I’ll avoid specific examples to reduce the risk of derailing the thread, but I would define “woke” as “prioritizing waging identity-group conflict above other values”. A piece of fiction has many dimensions on which it could be good or bad: novelty, consistency, believability, immersion, likability of characters, depth of characters, emotional range from plot events, predictability of plot, humor, and many more. A woke piece of fiction would be one in which it’s clear that many decisions have been made by a woke ethos, which considers it a good tradeoff to make significant sacrifices on those dimensions in order to advance its preferred identity-group conflict(s); the more woke, the more extreme those sacrifices.
To the react: There were several classes in which I didn’t do the homework, which accounted for something like 15% of the grade, and I got something like 92% on the tests and projects; but since they took away 15% for the homework, the result was 77%, a C instead of an A. To my mind, a grade should be something like the best estimate of a student’s abilities and knowledge of a subject, and since the homework problems don’t measure anything the tests don’t, other than obedience (I’m distinguishing “answer these 5-20 short problems” from a major project), it seems to me inappropriate to mark someone down for not doing the homework.
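Rough arithmetic with the approximate figures above (both were “something like”, so it lands near, rather than exactly on, the 77%):

```python
homework_weight = 0.15        # homework: ~15% of the grade, scored 0 (skipped)
other_score = 0.92            # ~92% on tests and projects
final = homework_weight * 0.0 + (1 - homework_weight) * other_score
print(f"{final:.0%}")         # ~78%: a C, despite ~92% on the parts that measure ability
```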
> – eliminate homework and weekly tests from counting toward semester grade
The homework part I would have appreciated. I did not need to do all those homework problems to learn the material, as my test scores proved, and giving me a C because I didn’t do the homework was, in my opinion, lying.
If you come from a drinking culture, then people who refuse to drink with you are obviously the people who want to keep secrets from you, so it makes sense to treat it as a red flag.
Of course, they might also be people who consider themselves high-risk for alcoholism and have chosen abstinence. I hope the drinking culture is able to accommodate them.
> how is it possible for them to develop a distinct style? Isn’t that like the only thing they—practically by definition—shouldn’t do?
Consult King James Programming: https://www.tumblr.com/kingjamesprogramming
A much simpler Markov chain, trained apparently on “the King James Bible and the Structure and Interpretation of Computer Programs” (I think I read that “The Art of Unix Programming” or something is also in there). Examples:
- then shall they call upon me, but I will not cause any information to be accumulated on the stack.
- How much more are ye better than the ordered-list representation
- evaluating the operator might modify env, which will be the hope of unjust men
If you imagine that style 1 has traits A1, B1, and C1, and style 2 has traits A2, B2, and C2, then you could end up with style 3 having traits A1, B2, and C1, which is a novel combination. Depending on your criteria of “style”, this might count as a new style. Here it’s pretty clumsy and heavy-handed, and it looks more like switching between style 1 and style 2 (IIRC this particular Markov chain uses the last 3 or 4 words, which is higher than usual and more likely to just reproduce chains of text from the original); but if you imagine there being 100 styles, each having 1000 traits, it seems much more likely that the resulting thing would qualify as a “new style” by a layman’s judgment.
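For concreteness, here’s a minimal word-level Markov chain of the sort described; this is my own sketch, not the actual King James Programming generator (which, per the above, reportedly uses a longer prefix of 3-4 words), and the corpus filenames in the usage comment are placeholders:

```python
import random
from collections import defaultdict

def train(texts, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    table = defaultdict(list)
    for text in texts:
        words = text.split()
        for i in range(len(words) - order):
            table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, length=40):
    order = len(next(iter(table)))            # prefix length used in training
    out = list(random.choice(list(table)))    # start from a random prefix
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:                     # dead end: jump to a fresh prefix
            out.extend(random.choice(list(table)))
            continue
        out.append(random.choice(followers))
    return " ".join(out)

# Usage (the filenames are placeholders for whatever two corpora you feed it):
# table = train([open("kjv.txt").read(), open("sicp.txt").read()])
# print(generate(table))
```

Because the table mixes follower lists from both corpora wherever their prefixes overlap, the output drifts between the two source styles, which is exactly the trait-recombination described above.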
This gets subtle. I can think of several cases where journalists sat on what would have been delicious scandals that should have been good for a career, for what look like political reasons. That said, if one looks closer, it’s plausible that, in each case, they reasoned (perhaps correctly) that it would not actually have been good for their career to publish, because they would have faced backlash (for political/tribal reasons), and possibly their editors (if applicable) would have refused to allow it. I imagine there is partial equivalence between this kind of “externally imposed political motivation” and “internalized political motivation”, and it may be worth tracking the difference.
That’s for omitting stories. For lying… On priors, that difference of external vs internal political motivation would be important: the latter would encourage a journalist to come up with new lies and use them, while the former would mostly just make them go along with lies that the rest of their tribe is already telling. I do see plenty of “going along with lies” and not much innovative mendacity; I’ll note that the “lies” I refer to are usually “not technically false, but cherry-picked and/or misleadingly phrased, which a normal person will hear and predictably come away believing a statement that is false; and which a journalist who felt a strong duty to tell the truth as best they could would not say absent stronger external pressure”. (See Zvi on bounded distrust.)