Hello! I work at Lightcone and like LessWrong :-). I have made some confidentiality agreements I can’t leak much metadata about (like who they are with). I have made no non-disparagement agreements.
kave
The predicted winners for future years of the review are now visible on the Best of LessWrong page! Here are the top ten guesses for the currently ongoing 2024 review:
(I’ve already voted on several of these! I doctored the screenshot to hide my votes)
I think LessWrong’s annual review is better than karma at finding the best and most enduring posts. Part of the dream for the review prediction markets is bringing some of that high-quality signal from the future into the present. That signal is currently highlighted with gold karma on the post item, if the prediction market has a high enough probability.
Currently the markets are pretty thinly traded, but I think they already have decent signal. They could do a lot better, I think, with a little more smart trading. It would be a nice bonus if this UI attracted a bit more betting.
Hopefully coming soon: a tag on the markets indicating which year's review they'll be in, to make it a bit easier for consistency traders to make their bag.
Human intelligence amplification is very important. Though I have become a bit less excited about it lately, I do still guess it’s the best way for humanity to make it to a glorious destiny. I found that having a bunch of different methods in one place organised my thoughts, and I could more seriously think about what approaches might work.
I appreciate that Tsvi included things as “hard” as brain emulation and as soft as rationality, tools for thought and social epistemology.
I liked this post. I thought it was interesting to read about how Tobes’ relation to AI changed, and the anecdotes were helpfully concrete. I could imagine him in those moments, and get a sense of how he was feeling.
I found this post helpful for relating to some of my friends and family as AI has been in the news more, and they connect it to my work and concerns. A more concrete thing I took away: the author's description of looking out of his window and meditating on the end reaching him through that window. I find this a helpful practice, and sometimes I like to look out of a window and think about various endgames and how they might land in my apartment or workplace or grocery store.
I’m a big fan of this series. I think that puzzles and exercises are undersupplied on LessWrong, especially ones that are fun, a bit collaborative and a bit competitive. I’ve recently been trying my hand at some of the backlog, and it’s been pretty cool. I can feel that I’m getting at least a bit better at compressing the dimensionality of the data as I investigate it.
In general, I’d guess that data science is a pretty important epistemological skill. I think LessWrongers aren’t as strong in it as they ideally would be. This is in part because of a justified suspicion that people just pour in data and confusion, and get out more official-looking confusion. I’d say that a central point of this series is: how do you avoid confusing yourself with data by actually thinking about things?
I have the impression that I reach for this rule fairly frequently. I only ontologise it as a rule to look out for because of this post. (I normally can’t remember the exact number, so have to go via the compound interest derivation).
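Assuming the rule in question is a doubling-time rule (something like the rule of 70/72 — my guess, not stated in the comment), the compound-interest derivation is just solving (1 + r)^t = 2:

```python
import math

def doubling_time(r: float) -> float:
    """Periods for a quantity to double at per-period growth rate r.

    Solves (1 + r)**t == 2, i.e. t = ln(2) / ln(1 + r).
    For small r this is roughly 0.693 / r, which is where
    "rule of 70"-style shortcuts come from.
    """
    return math.log(2) / math.log(1 + r)

# doubling_time(0.01) is about 69.7, close to the rule-of-70 estimate of 70
```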
Open Thread Winter 2025/26
(My plus is conditional on me not being the adjudicator)
SFF is matching donations on some orgs through the end of 2025 (see the list), which signals which orgs they want more people to donate to.
As I work for an org which receives matching, I think it’s important to note this has nothing to do with which orgs SFF likes best.
When you apply for an SFF grant, you can opt into receiving some of your funds as matching pledges. That gives you more weight in the S-Process algorithm; the S-Process treats it as if it can give you >$1 per dollar it spends.
So it’s just about what orgs felt would be best for their fundraising, not endorsement from the SFF.
(See more here)
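To illustrate the mechanism with a toy sketch (a made-up multiplier, not the actual S-Process implementation): when an org opts into matching, each dollar the process allocates to it counts as more than a dollar delivered.

```python
def effective_dollars(spend: float, match_multiplier: float) -> float:
    """Toy model of how matching pledges stretch an allocation.

    `match_multiplier` is hypothetical: e.g. 0.5 means each allocated
    dollar unlocks an extra $0.50 of matched outside donations, so the
    allocator's dollar is worth more than $1 to the org.
    """
    return spend * (1 + match_multiplier)

# With a (made-up) 0.5x match, a $100k allocation behaves like $150k:
# effective_dollars(100_000, 0.5) == 150_000.0
```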
Fire alarms basically don’t help with fire deaths at all
Is that true? I don't think there's amazing evidence, but my sense is that it's sufficient to expect that fire alarms help. I think the study designs look like:
- Comparing fire deaths when there are working fire alarms versus not. To help with confounders, they try to do within-neighbourhood comparisons. Obviously that won't be sufficient, but still.
- Running a campaign in a neighbourhood to get people to install fire alarms, and comparing fire deaths before and after. Unfortunately, I think most campaigns also include education, and at least increase the salience of fire, so that makes it harder to tell.
- Observing earlier evacuation with fire alarms, together with an association between evacuation time and mortality.
*Lighthaven->Lightcone (at least in the case of SFF matching)
Reading the May 14 update, it looks like it describes adding the last paragraph of Habryka’s opening blockquote. If that’s right, he goes on to describe why this exclusion wouldn’t trigger here.
Related: The Middleman Economy (book) and EconTalk episode.
To tie the two comments together explicitly: an n-person meeting in which each person gives an x-minute update requires nx clock-minutes, and each clock-minute spends n person-minutes, for (n^2)x person-minutes total.
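The arithmetic above can be sketched as a quick check (a toy calculation; the function name and example numbers are mine):

```python
def meeting_cost(n: int, x: float) -> tuple[float, float]:
    """Cost of an n-person meeting where each person gives an x-minute update.

    Returns (clock_minutes, person_minutes).
    """
    clock_minutes = n * x                # n updates, x minutes each
    person_minutes = n * clock_minutes   # every clock-minute occupies all n people
    return clock_minutes, person_minutes

# e.g. 8 people giving 5-minute updates:
# meeting_cost(8, 5) == (40, 320), i.e. (n^2)x = 64 * 5 person-minutes
```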
You’re looking for this section, btw
user:annasalamon burning man. Sadly, it doesn't look like we support quotes with user: searches. I'll file a bug about it
I think most of the value is in the publishing, rather than the amount of content, so the word minimum should probably be lower. If I look through World Spirit Sock Puppet (IMO a great blog), the majority of the posts seem to be <500 words. It's possible that it's just too hard to police quality, or at least effort, with shorter posts. In that case, maybe it's worth increasing the proof of work.
If the oxytocin receptor doesn’t work, this won’t do much
(FWIW, I’ve referenced that post 2-4 times since it was posted)
The biggest surprise to me was that every company was not already doing this—isn’t it the obvious thing to do? WTF do they teach in business school?
I wonder how this system would perform if charged with all the overhead costs of implementing such fine-grained tracking. Seems pretty tricky to answer, I guess: it requires the counterfactual of not using the system at all.
Thank you!
Would it help if the prompt read more like a menu?