I think quantifying how strong a particular signal is will be an important part of having a self-correcting political coalition. My guess is that public statements of this kind are epsilon, but I'm open to data showing I'm wrong.
An example of a harder password to fake is “I have made many public statements about my commitments that would look bad for me if I betrayed them.”
How bad is this, actually? My impression is that everyone complains about politicians lying, but in practice it doesn't cost you much. E.g. no one says "well, I otherwise like him, but he betrayed this position, which I hate, so I won't support him."
The effect isn’t instantaneous, right? Is a moment in the HVAC system enough to kill them?
Viruses are much more vulnerable than skin bacteria, although that doesn't rule out microbiome damage entirely.
I absolutely love that solstice has a different vision every year, and feel angry at hypothetical guys who expect every year to be for them. I'd rather see it executed excellently for some people than evenly mediocre for everyone.
But then, a year later, a lot of people seemed stuck in kind of low-agency, highly risk-averse group settings that were (probably) too conservative.
Availability bias and all that, but my impression is rationalists were unusually fast to change norms after vaccines came out, compared to other people who took covid very seriously.
I agree with you, and also, the rationalist community seems unusually willing to pay friends money for things. Bountied Rationality exists. I know more than one couple that will pay each other to do things one wants but the other finds unpleasant. The "happy price" meme encourages people to ask for much more money, and avoids the social obligation to give a friend price (which is kind of a gift).
the median self-identified earner-to-give only donates about 5% of their income (IIRC, I can’t find the data now)
isn’t that brought down by students?
reason 16: you hate board games
#sixseasonsandamovie
The only knowledge "you might enjoy playing it" displays is that you know they like board games, which is not enough to be meaningful. To qualify as knowing the person, it needs to be much more specific. And indeed, I value "you should read this book in particular because of x, y, z" much more than the median book someone might buy me.
To rephrase more neutrally: there's a trade-off between optionality and the opportunities that can only be unlocked through long commitment (analogous to rabbit hunters vs stag hunters, but over a prolonged period). Assume there's a Pareto frontier and one's position on it is morally neutral: high-committers/stag-hunters are still better off if they pair with each other than with high-optionality types/rabbit-hunters (although the reverse is much less true). It sucks wanting to stag hunt when everyone around you wants rabbits. Monogamy can be useful as a costly signal of "I want to stag hunt", even for someone who would be fine being poly with another stag-hunter.
I can think of at least one friend who self-describes as not feeling jealousy and being more naturally poly, but chose monogamy for basically this reason.
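A toy payoff table makes the asymmetry concrete. This is just a sketch with made-up numbers, not anything from the original discussion:

```python
# Toy stag-hunt-style payoffs for pairing by commitment style.
# First entry: your type; second entry: your partner's type. All numbers are made up.
payoffs = {
    ("committer", "committer"): 5,     # long-commitment opportunities unlocked
    ("committer", "optionality"): 1,   # the stag hunter stuck hunting rabbits
    ("optionality", "committer"): 3,   # the rabbit hunter does fine either way
    ("optionality", "optionality"): 3,
}

# A committer loses a lot from a mismatch (5 -> 1); an optionality-type
# loses little or nothing (3 -> 3). Hence the value of a costly signal
# that screens for other committers.
for (you, partner), payoff in payoffs.items():
    print(f"{you} paired with {partner}: {payoff}")
```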
REASONS BESIDES JEALOUSY TO NOT BE POLYAMOROUS
Recently Amanda Bethlehem published a piece comparing monogamous jealousy to kidney disease. Eneasz Brodski doubled down on this. I disagree with a lot of their implications, but today I'm going to focus on the implicit claim that jealousy is the only reason to be monogamous. Here is a list of other reasons you might choose monogamy:
Your sex and romance needs are satisfied by one person, and extra isn’t that valuable to you (aka “polysaturated at 1”, or in the case of one guy I know, at 0)
You + your partner are capable of allowing cuddling with friends and friendship with exes without needing to make everything allowed.
You are busy.
You hate coordination labor/value simplicity. I have been with my partner for over 4 years and we have never shared calendars. He knows when my D&D night is and I know when his house meeting is, and otherwise we work it out over text. The only reason sharing calendars would occur to me is exposure to poly.
You hate the idea of people you didn’t choose having a significant effect on your life.
Or maybe you're conceptually fine with that, but you/your partner are bad at that kind of coordination.
You are immunocompromised. STDs get all the attention, but poly also facilitates transmission of simple respiratory illnesses. Your primary's close secondary's tertiary relationship is probably not going to give up concerts even if colds are devastating to you.
You take breakups really hard. Maybe to the point that dating isn’t worth it for you, or maybe it’s worth it for you but only because your partner supports you through it and they’re not up for 4 months of avoidable depression per year.
You landed one great primary relationship but have shit taste in general. If you date, it will be a shitshow.
You really want the benefits of a strong primary/mono relationship, and are incapable of keeping secondary relationships small. The easiest way to protect your relationship is to not start another.
Your job requires a clean public image, and you value that more than extra relationships.
You lack the communication skill to do it well, and developing it is not your top priority.
Having your partner as your only source of sex/romance makes you try harder in ways you endorse.
Maybe you’ll be poly someday but right now you want to establish security with each other.
You could get over your jealousy but other emotional growth opportunities are higher priority.
It’s a tenet of LessWrong that factual content and emotional valence are separate axes. Or more plainly, disagreeing on a matter of fact never makes you an asshole, but delivery can.
Is it possible to take actions that cause people to dismiss something, without being sneering?
Could you define sneering, as you use it? It sounds to me like you mean something like “dismissing in entirety”, which is not my definition.
The Boring Part of Bell Labs
Giving attention to sneering comments that happen to bubble up to you isn't Pareto optimal on any front. If you want to learn where you are wrong, seek out the most insightful people who disagree with you (and not just the ones that use long essays to lay out their case logically).
Back in 2020, @Raemon gave me some extremely good advice.
@johnswentworth had left some comments on a post of mine that I found extremely frustrating and counterproductive. At the time I had no idea about his body of work, so he was just some annoying guy. Ray, who did know who John was and thought he was doing important work, told me:
You can’t save the world without working with people at least as annoying as John.
Which didn't mean I had to heal the rift with John in particular, but if I made refusing to work with annoying people a policy, I would need to give up on my goal of having real impact.
John and I did a video call, and it went well. He pointed out a major flaw in my post, and I impressed him by immediately updating. I still think his original comments displayed the very status dynamics they were sneering at, and I find that frustrating, but Ray was right that not all factual corrections will be delivered in pleasing forms.
[Epistemic status: puzzling something out, very uncertain, optimizing for epistemic legibility so I'm easy to argue with. All specific numbers are ass pulls]
In my ideal world, the anti-X-risk political ecosystem has an abundance of obviously high quality candidates. This doesn’t appear to be on the table.
Adjusting for realism, my favorite is probably “have a wide top of funnel for AI safety candidates, narrow sharply as they have time to demonstrate traits we can judge them on”. But this depends on having an ecosystem with good judgement, and I’m not sure that that’s actually realistic.
I think it's probably pretty easy to identify the top political candidates. These are people like Alex Bores, who have a track record of getting hard legislation through their legislature and have left public evidence of strongly understanding the issue (TODO: link). If I had to put a number on it, I'd count us as indescribably lucky to have 5% of candidates be this obviously good.
It’s also pretty easy to identify the worst candidates, by track record or by the financial support of pro-AI PACs. Let’s optimistically put this at the bottom 50% of candidates.
I don't expect getting the obviously great 5% elected/appointed to be enough to win on AI safety issues. I also low-confidence expect supporting the middle 45% uniformly to be worse than doing nothing. And not just because some of them are lying, but because useful legislation is such a narrow target, and lots of people mean well without having the skill to actually be helpful. There's also the negative externality of crowding out better candidates. Say we need the 80-95th percentile candidates to win. A candidate who competes for the same support, and who is well-meaning but less competent than the 80th percentile, is actively taking away from better candidates.*
[*You can come up with math where this is okay if resources are abundant and the lesser candidates are merely less good rather than actively bad, but I expect resources to be tight and doing more harm than good to be easy. You can also solve a lot of this problem with web-of-trust, but we need something that will scale]
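Here's a minimal sketch of the footnote's math, with a linear win-probability model and candidate values that are pure ass pulls (consistent with the epistemic status above):

```python
# Toy model of crowding out: a fixed pool of support split between an
# 85th-percentile candidate and a well-meaning 70th-percentile candidate
# competing for the same resources. Every number here is illustrative.

def expected_value(support, value_if_elected, support_needed=100):
    # Win probability rises linearly with support until fully funded
    # (a big simplification, chosen only to make the tradeoff visible).
    p_win = min(support / support_needed, 1.0)
    return p_win * value_if_elected

GREAT_VALUE, MID_VALUE = 10, 3  # assumed value-if-elected of each candidate

# Scarce resources: 100 units of support total, split between the two.
for great_share in (100, 70, 50):
    total = (expected_value(great_share, GREAT_VALUE)
             + expected_value(100 - great_share, MID_VALUE))
    print(f"great candidate gets {great_share} units: expected value {total:.1f}")
# 10.0, 7.9, 6.5 -- every unit diverted to the mid candidate costs more than
# it buys, even though the mid candidate's value is positive.

# Abundant resources: both get fully funded and the mid candidate is pure
# upside -- but only if their value really is positive, which is exactly the
# assumption I doubt for much of the middle 45%.
print(expected_value(100, GREAT_VALUE) + expected_value(100, MID_VALUE))  # 13.0
```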
In this world, it becomes critical to distinguish 80-95th percentile candidates from 50th-80th. Let's optimistically assume this can happen, even if it hasn't yet. Given that exact state of affairs, how should I feel about assisting a candidate with no track record? Or maybe: how should someone with no budget constraints think about it?
Logically, my guess is that starting to lay down fertilizer now, so the judges have something to judge once they're in place, is net helpful. Intuitively, I feel the opposite. If forced to justify this, I will say things like "having a bunch of mid people with things to lose makes it harder to create the skillful judging system", but these might be rationalizations.
Some cruxes in this model:
Being net beneficial on AI safety via government work is a very narrow target.
It is easy to do net harm to your sincerely held goals.
If there is a money firehose, people will mouth the words needed to access it, regardless of their actual beliefs or intentions.
^ has significant costs beyond the mere loss of money.
Judging the impact of politicians and appointees is extremely difficult even when you have all the information.
Most of the relevant information will be hidden.
Bringing on too many candidates too quickly, before a judgement apparatus is set up, will harm the ability to set up the judgement apparatus.