LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
well I said “harder to fake”, not ironclad or “sufficiently hard to fake.” It’s better than “in private, he said he cared about My Pet Cause”
I do think people sometimes get mad at and vote out politicians that betrayed a principle they care about, esp. if they are a single-issue voter.
Yeah, to be clear I have not thought that hard about how to handle the lawsuits. Even with a functioning lawsuit defense org-thingy, I think Evaluator People will probably need to have courage / conflict-readiness, and part of the post is a call for that.
I think the best model here is a constellation of individuals and micro-orgs, and some donors who are serious about supporting the entire endeavor (which does unfortunately involve some modeling of “what counts as the endeavor”).
I find the PDF kinda annoying to read, could we copy it over here?
The watchmen publish their work publicly, people read the writeups and check that they make sense, pretty much anyone can become a watchman if they want. (The whole idea is that the writeups can be private during low-key-fundraising periods but public afterwards so it’s easier to sanity-check. Also, during the lowkey periods, there can be private mailing lists for discussion)
The answer to “who watches the watchmen” is “distributed spotchecks by readers.” Will that be perfect? No. It just has to be good enough to make it worth making a lot more political donations at scale.
It feels like you’re a) assuming I’m more absolutist about this than I am, and b) assuming that I haven’t thought about the stuff you mention here, and I don’t really know why.
Ah, alas. Well I just replaced most of the instances with “bloc” but I’m not sure if the connotations of that are quite right anyway.
I thought “political machine” was a category that included the corrupt bad thing in the wiki article but also included other things, and, man now I’m not sure there’s even a good word for exactly what I want.
In this case, I’m not saying “let’s make an Epistemics Party.” I’m saying, rationalsphere people who agree on political goals should coordinate to achieve those goals as effectively as possible (which includes preserving rationality).
I expect this to look more like supporting ordinary Democrat or Republican candidates (in the US), or otherwise mostly engaging with the existing political apparatus.
I did say roughly that this would happen; the thing I regret is not quite threading the needle on communicating:
“guys, when I say it’s gonna get dark, I’m, like, more serious than usual. But I am also more serious than usual about it being light at the end, and when you are evaluating the darkness which is darker than you expect, I will be trying pretty hard to counterbalance that.” (literally those words feel too awkward, but, that’s the vibe I wanted)
I think it depends on how much you believe in yourself vs. other specific people. I agree with funding other specific people you believe in.
(fixed)
Mmm, I’m thinking of before vaccines came out. I have more thoughts about that but maybe don’t want to make this thread all about that.
I’ve heard similar comments from several people about the afterparty, and regret not spending a lot more time trying to make it a good part of the experience. In future years I’d maybe prefer the Saturday-night afterparty to be primarily “for Solstice attendees,” and try to make a different night of the weekend the “everyone from all over the extended community comes over” night.
(You didn’t mention the decompression zone but I maybe also want to take the opportunity to apologize: I had announced the decompression zone around firepits, but, then it turned out that all the firepits were full of people by the time I got there, and the whole area was so loud it felt hard to do announcements to direct people into the room we found. What I realize now is that I should have put up more/bigger signs about that)
Ah whoops. Fixed.
(Normally this wouldn’t have been that bad a problem since the form itself is private; I happened to make it public earlier today to make it easier to get feedback on the questions)
(For people who read it already, I just added an Appendix of Director Commentary. I might add another Appendix B about why I made some of the choices in the event that did get included)
fwiw I think there is a good thing about steelmanning and a different good thing about ITT passing. (Which seems plausibly consistent with Rob’s title ITT-passing and civility are good; “charity” is bad; steelmanning is niche, and also your post title here. I haven’t reread either yet but am responding since I was tagged)
ITT passing is good for making sure you are having a conversation that changes people’s minds, and not getting confused/misled about what other people believe.
Steelmanning is good for identifying the strongest forms of arguments in a vacuum, which is useful for exploring the argument space but also prone to spending time on something that nobody believes or cares about, which is sometimes worth it and sometimes not. (it also often is part of a process that misleads people about what a person or group believes)
Which of those is more important most of the time? I dunno, the answer AFAICT is “each consideration is important enough to track that you should pay attention to them periodically.” And attempts to pin this down further feel more like some kind of culture war that isn’t primarily about the object-level fact of how often they are useful.
(apologies if I have missed a major point here, replying quickly at a busy time)
Minor reference that I agree wasn’t worth spelling out in the post but seemed nice to include: A Little Echo is a song I wrote in 2012 as “a cryonics funeral song”, about the various ways that echoes of people can survive.
It hasn’t turned out to be a mainstay Solstice song. I was actually a bit sad that this solstice turned out, last-minute-accidentally, to be the most cryonics-heavy Solstice I’ve led (as a recurring B Plot), but it didn’t really make sense to do the song because other songs were filling its niche as a singalong.
I believe that we will win.
An echo of an old ad for the 2014 US men’s World Cup team. It did not win.
See: @AnnaSalamon’s Believing In.
I’ve recently been meditating on Eliezer’s:
Beliefs are for being true. Use them for nothing else.
If you need a good thing to happen, use a plan for that.
I think Anna Salamon is right that there are two separate things people call beliefs, one of which is about probabilities, and one is about what things you want to invest in.
In one early CFAR test session, we asked volunteers to each write down something they believed. My plan was that we would then think together about what we would see in a world where each belief was true, compared to a world where it was false.
I was a bit flummoxed when, instead of the beliefs-aka-predictions I had been expecting, they wrote down such “beliefs” as “the environment,” “kindness,” or “respecting people.” At the time, I thought this meant that the state of ambient rationality was so low that people didn’t know “beliefs” were supposed to be predictions, as opposed to group affiliations.
I’ve since changed my mind. My new view is that there is not one but two useful kinds of vaguely belief-like thingies – one to do with predictions and Bayes-math, and a different one I’ll call “believing in.” I believe both are lawlike, and neither is a flawed attempt to imitate/parasitize the other. I further believe both can be practiced at once – that they are distinct but compatible.
I’ll be aiming, in this post, to give a clear concept of “believing in,” and to get readers’ models of “how to ‘believe in’ well” disentangled from their models of “how to predict well.”
I think how to fully integrate Believing In is a dangling thread of rationality discourse. Fortunately, it’s The Review Season, so it’s a good time to go back to the Believing In post and review it.
One thing to note is that “short reviews” in the nomination phase are meant to be basically a different type of object than “effort reviews.” Originally we actually had a whole different data-type for them (“nominations”), but it didn’t seem worth the complexity cost.
And then, separately: one of the points of the review is just to track “did anyone find this actually helpful?” and a short review that’s like “yep, I did in fact use this concept and it helped me, here’s a few details about it” is valuable signal.
A drive-by “this seems false, because [citation]” is also good.
It is nice to do more effortful reviews, but I definitely care about those types of short reviews.
Thanks!
The reason I asked you to write some-version-of-this is, I have in fact noticed myself veering towards a certain kind of melodrama about the whole x-risk thing, and I’ve found various flavors of your “have you considered just… not doing that?” to be helpful to me. “Oh, I can just choose to not be melodramatic about things.”
(on net I am still fairly dramatic/narrative-shaped as rationalists go, but, I’ve deliberately tuned the knob in the other direction periodically and think various little bits of writing of yours have helped me)
I liked how at Solstice you framed it as a general prompt to treat it as a skill issue, without getting into the exact recipe.
Yeah, I do not think it is good to format that in blockquotes without spelling out that it is a paraphrase (in my original I say “something like ‘Anthropic wants...’”)
I got positive feedback about it working for people who previously hadn’t been into group singing, and the “One Shot Singing” segment actually is in the top 10 setlist elements according to the ratings, which is pretty high for a meta-instructional segment.
https://secularsolstice.vercel.app/programs/cd5573d9-b3fe-4f16-ae01-09a0cbc8f931/results
My impression is it worked pretty well, although I think of this as a multi-year project that will require followup to solidify.