Here is a list of all my public writings and videos.
If you want to do a dialogue with me, but I didn’t check your name, just send me a message instead. Ask for what you want!
Fascinating. You’re one of the names on Less Wrong that I associate with positive, constructive dialogue. We may have a scissor statement here.
I appreciate your earnest attempt to understand what I’m writing. I don’t think “weirdos/normies” nor “Critical thinkers/uncritical thinkers” quite point at what I’m trying to point at with “independent/conventional”.
“Independent/conventional” is about whether what other people think causes you to reach the same conclusions as other people. “Weirdos/normies” is about whether you reach the same conclusions as other people. In other words, “weirdos/normies” is correlation; “independent/conventional” is causation in a specific direction. Independent tends to correlate with weirdo, and conventional tends to correlate with normie, but it’s possible to have either one without the other.
You are correct that critical thinkers may want to censor uncritical thinkers. However, independent-minded thinkers do not want to censor conventional-minded thinkers.
I appreciate your compliment too.
Your comment is not a censure of me.
I didn’t feel the need to distinguish between censorship of ideas and censorship of independent-minded people, because censorship of ideas censors the independent-minded.
“give enough examples to know what kind of exceptions to look for”
I deliberately avoided examples for the same reason Paul Graham’s What You Can’t Say deliberately avoids giving any specific examples: either my examples would be mild and weak (and therefore poor illustrations), or they’d be so shocking (to most people) that they’d derail the whole conversation.
Did you read the Paul Graham article I linked? Do you disagree with it too?
Independent-mindedness is multi-dimensional. You can be more independent-minded in one domain than another.
I made my November 20, 2023 08:58:05 UTC post between the dip and the recovery.
November 20, 2023 19:54:45 UTC
Result: Microsoft has gained approximately $100B in market capitalization.
November 20, 2023 08:58:05 UTC
If my phone weren’t broken right now, I’d be creating a Robinhood (or whatever) account so I could long Microsoft. Ideally I’d buy shares, but calls (options to buy) are fine.
Why? Because after the disaster at OpenAI, Satya Nadella just hired Sam Altman to work for Microsoft directly.
My deontological terminal value isn’t to causally win. It’s for FDT agents to acausally lose. Either I win, or the FDT agents abandon FDT. (Which proves that FDT is an exploitable decision theory.)
I’m not sure I see the pathological case of the problem statement (an agent whose utility function is ‘do the worst possible action to agents who exactly implement [specific decision theory]’) as a problem either. Do you have a specific idea of how you would get past this?
There’s a Daoist answer: Don’t legibly and universally precommit to a decision theory.
But the exploit I’m trying to point to is simpler than Daoist decision theory. Here it is: Functional decision theory conflates two decisions:
Use FDT.
Determine a strategy via FDT.
I’m blackmailing contingent on decision 1 and not on decision 2. I’m not doing this because I need to win. I’m doing it because I can. Because it puts FDT agents in a hilarious lose-lose situation.
The thing FDT disciples don’t understand is that I’m happy to take the scenario where FDT agents don’t cave to blackmail. Because of this, FDT demands that FDT agents cave to my blackmail.
Correct. The last time I was negotiating with a self-described FDT agent I did it anyway. 😛
My utility function is “make functional decision theorists look stupid”, which I satisfy by blackmailing them. Either they cave, which means I win, or they don’t cave, which demonstrates that FDT is stupid.
Not if you do it anyway.
There are a couple of different ways of exploiting an FDT agent. One method is to notice that FDT agents have implicitly precommitted to FDT (rather than to the theorist’s intended terminal value function). It’s therefore possible to contrive scenarios in which those two objectives diverge.
Another method is to modify your own value function such that “make functional decision theorists look stupid” becomes a terminal value. After you do that, you can blackmail them with impunity.
FDT is a reasonable heuristic, but it’s not secure against pathological hostile action.
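The lose-lose structure described above can be written out as a toy payoff function. This is a sketch with made-up numbers (the function name and payoff values are my own illustrative assumptions, not anything from the thread); the only property that matters is that the blackmailer’s payoff is positive on both branches, so the utility function “make functional decision theorists look stupid” is satisfied either way.

```python
# Toy payoff sketch of the blackmail lose-lose argument.
# All numbers are illustrative assumptions; the point is only that
# the blackmailer's utility is positive on BOTH branches.

def payoffs(fdt_agent_caves: bool) -> tuple[int, int]:
    """Return (blackmailer_utility, fdt_agent_utility)."""
    if fdt_agent_caves:
        # The FDT agent pays up; the blackmailer extracts value.
        return (1, -1)
    # The blackmail is carried out anyway; the FDT agent eats the
    # damage, and the refusal itself "makes FDT look stupid".
    return (1, -3)

for caves in (True, False):
    blackmailer, fdt_agent = payoffs(caves)
    print(f"caves={caves}: blackmailer={blackmailer}, FDT agent={fdt_agent}")
```

Either branch leaves the blackmailer ahead, which is the claimed exploit: the choice FDT makes only determines *how* the FDT agent loses.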
I’m not sure if this is the right course of action. I’m just thinking about the impact of different voting systems on group behavior. I definitely don’t want to change anything important without considering negative impacts.
But I suspect that strong downvotes might quietly contribute to LW being more groupthink-y.
Consider a situation where a post strongly offends a small number of LW regulars but is generally approved of by the median reader. A small number of regulars strong-downvote the post, resulting in suppression of the undesirable idea.
I think this is unhealthy. I think a small number of enthusiastic supporters should be able to push an idea (hence allowing strong upvotes) but that a small number of enthusiastic detractors should not be able to suppress a post.
For LW to do its job, posts must be downvoted because they are poorly reasoned and badly written.
I often write things which are badly written (which deserve to be downvoted) and also things which are merely offensive (which should not be downvoted). [I mean this in the sense of promoting heretical ideas. Name-calling absolutely deserves to be downvoted.] I suspect that strong downvotes are placed more on my offensive posts than my poorly-written posts, which is opposite the signal LW should be supporting.
There is a catch: abolishing strong downvotes might weaken community norms and potentially allow posts to become more political/newsy, which we don’t want. It may also weaken the filter against low quality comments.
Though, perhaps all of that is just self-interested confabulation. What’s really bothering me is that I feel like my more offensive/heretical posts get quickly strong downvoted by what I suspect is a small number of angry users. (My genuinely bad posts get soft downvoted by many users, and get very few upvotes.)
In the past, this has been followed by good argument. (Which is fine!) But recently, it hasn’t, which makes me feel like it’s been driven by anger and offense, i.e., a desire to suppress bad ideas rather than untangle why they’re wrong.
This is all very subjective and I don’t have any hard data. I’ve just been getting a bad feeling for a while. This dynamic (if real) has discouraged me from posting my most interesting (heretical) ideas on LW. It’s especially discouraged me from questioning the LW orthodoxy in top-level posts.
Soft downvotes make me feel “this is bad writing”. Strong downvotes make me feel “you’re not welcome here”.
That said, I am not a moderator. (And, as always, I appreciate the hard work you do to keep things well gardened.) It’s entirely possible that my proposal has more negative effects than positive effects. I’m just one datapoint.
Proposal: Remove strong downvotes (or limit their power to −3). Keep regular upvotes, regular downvotes, and strong upvotes.
Variant: strong downvoting a post blocks that user’s posts from appearing on your feed.
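The proposal can be sketched as a vote-weighting function. All numbers here are my assumptions for illustration (I’m supposing a regular’s strong vote is normally worth 8 karma, and using the proposed −3 cap); this is not LessWrong’s actual karma code.

```python
# Sketch of the proposed vote weights: regular votes stay at +/-1,
# strong upvotes keep their full user-dependent strength, and strong
# downvotes are capped at -3. Illustrative assumptions throughout.

STRONG_DOWNVOTE_CAP = -3  # the proposed limit

def vote_weight(vote: str, strong_vote_strength: int) -> int:
    """strong_vote_strength: what this user's strong vote would normally be worth."""
    if vote == "upvote":
        return 1
    if vote == "downvote":
        return -1
    if vote == "strong_upvote":
        return strong_vote_strength
    if vote == "strong_downvote":
        return max(-strong_vote_strength, STRONG_DOWNVOTE_CAP)
    raise ValueError(f"unknown vote type: {vote}")

# The scenario from the comment: three regulars (strength-8 strong votes)
# strong-downvote a post that ten median readers upvoted.
current = 10 * 1 + 3 * (-8)  # uncapped strong downvotes
proposed = 10 * vote_weight("upvote", 1) + 3 * vote_weight("strong_downvote", 8)
print(current, proposed)  # -14 vs 1
```

Under the assumed weights the post goes from net −14 (suppressed) to net +1 (visible), which is the asymmetry the proposal aims for: a few enthusiastic detractors can no longer overrule many mild approvers, while strong upvotes keep their full push.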
The decline of dueling coincided with firearms getting much more reliable. Duels should have the possibility of death, but should not (usually) be “to the death”.
Great digest, as always. My favorite parts were the link to the US census policy explanation and the reminder that most people don’t distinguish between choices and mandates.
This was a fun survey!