Hello! I work at Lightcone and like LessWrong :-). I have made some confidentiality agreements I can’t leak much metadata about (like who they are with). I have made no non-disparagement agreements.
It sounds like you’re talking about the standards for frontpaging rather than quick take vs post?
What did you update your model to?
Noted. I’m not planning to revert the change, but I will try and track this cost.
FWIW, I think you might suffer less from this than you think. I believe every quick take I removed from the frontpage today was made after a post on the topic had been made, and, in most or all cases, after the post had been officially moved to personal.
(EDIT: or, perhaps, the conclusion I should draw from my previous paragraph is that adding this feature won’t help you that much, because the distribution of tag filters among the user base will mean few enough people see and upvote the quick take that it won’t appear on the frontpage for you)
It would be good to have this feature, but we don’t yet
There has been a rash of highly upvoted quick takes recently that don’t meet our frontpage guidelines. They are often timely, perhaps because they’re political, are pitching something to the reader, or are inside baseball. These are all fine or even good things to write on LessWrong! But I (and the rest of the moderation team I talked to) still want to keep the content on the frontpage of LessWrong timeless.
Unlike posts, quick takes don’t each get manually assigned to frontpage or personal (posts are treated as personal until they’re actively frontpaged). Instead, quick takes are treated as frontpage by default, but we do have the ability to move them to personal.
I’m writing this because a bunch of us are planning to be more active about moving quick takes off the frontpage. I also might link to this comment to clarify what’s happening in cases of confusion.
On your example: confidence intervals on a ranking seem like quite a strange beast; it seems like you would get them by something like interval arithmetic on your confidence intervals for a scalar rating.
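As a concrete illustration (my own sketch, not something from the thread), here is roughly what interval arithmetic on rating intervals could look like; the items and the intervals are made up:

```python
# Hypothetical sketch: derive a rank interval for each item from a
# confidence interval on its scalar rating. Items and numbers are invented.

def rank_intervals(ratings):
    """ratings: dict of name -> (low, high) interval on a scalar rating.
    Returns dict of name -> (best_rank, worst_rank), with rank 1 = highest."""
    out = {}
    for name, (lo, hi) in ratings.items():
        # Best case: only items whose lower bound beats this item's upper
        # bound must rank above it.
        best = 1 + sum(1 for other, (olo, _ohi) in ratings.items()
                       if other != name and olo > hi)
        # Worst case: every item whose upper bound beats this item's lower
        # bound could rank above it.
        worst = 1 + sum(1 for other, (_olo, ohi) in ratings.items()
                        if other != name and ohi > lo)
        out[name] = (best, worst)
    return out

print(rank_intervals({"A": (7.0, 9.0), "B": (6.5, 7.5), "C": (2.0, 3.0)}))
# A and B overlap, so each could be 1st or 2nd; C is pinned at 3rd.
```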
I think retiring is hard for lots of people cos they don’t really change their minds about this
I’d be interested to read the full transcript. Is that available anywhere? Sorry if I missed it
Yup. The missing assumption is that setting up and running experiments is inside the funny subset, perhaps because it’s fairly routine
A version of the argument I’ve heard:
AI can do longer and longer coding tasks. That makes it easier for AI builders to run different experiments that might let them build AGI. So either it’s the case that both (a) the long-horizon coding AI won’t help with experiment selection at all and (b) the experiments will saturate the available compute resources before they’re helpful; or, long-horizon coding AI will make strong AI come quickly.
I think it’s not too hard to believe (a) & (b), fwiw. Randomly run experiments might not lead to anyone figuring out the idea they need to build strong AI.
Mod here. This post violates our LLM Writing Policy for LessWrong, so I have delisted it; it’s now only accessible via direct link. I haven’t returned it to the user’s drafts, because that would make the comments hard to access.
@sdeture, we’ll remove posting permissions if you post more direct LLM output.
I think the larger effect is treating the probabilities as independent when they’re not.
Suppose I have a jar of jelly beans, which are either all red, all green, or all blue. You want to know what the probability of drawing 100 blue jelly beans is. Is it $(1/3)^{100}$? No, of course not. That’s what you get if you multiply 1/3 by itself 100 times. But you should condition on your results as you go: $P(\text{jelly}_1 = \text{blue}) \cdot P(\text{jelly}_2 = \text{blue} \mid \text{jelly}_1 = \text{blue}) \cdot P(\text{jelly}_3 = \text{blue} \mid \text{jelly}_1 = \text{blue}, \text{jelly}_2 = \text{blue}) \cdots$
Every factor but the first is 1, so the probability is $1/3$.
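A quick numerical check of the same point (my own sketch, not part of the original comment), comparing the wrong independence assumption against conditioning on earlier draws:

```python
import random

def p_all_blue_naive(n):
    # Wrongly treats the draws as independent: (1/3)**n.
    return (1 / 3) ** n

def p_all_blue_conditioned(n):
    # P(first draw is blue) = 1/3; every later conditional factor is 1,
    # because the jar is all one colour.
    return 1 / 3

def simulate(n_draws=100, trials=100_000):
    hits = 0
    for _ in range(trials):
        colour = random.choice(["red", "green", "blue"])  # the whole jar is this colour
        draws = [colour] * n_draws                        # so every draw matches it
        hits += all(c == "blue" for c in draws)
    return hits / trials

print(p_all_blue_naive(100))        # astronomically small
print(p_all_blue_conditioned(100))  # 0.333...
print(simulate())                   # ≈ 0.33 in simulation
```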
I know many of you folks care a lot about how AI goes. I’m curious how you connect that with – or actively disconnect that from – the new workshops.
The question I’m most interested in: do you have a set of values you intend the workshops to do well by, that don’t involve AI, and that you don’t intend to let AI pre-empt?[1][2]
I’m also interested in any thinking you have about how the workshops support the role of x-risk, but if I could pick one question, it’d be the former.
It looks like last year it was Fall, and the year before it was Autumn.
I agree, but that’s controlled by your browser, and not something that (AFAIK) LessWrong can alter. On desktop we have the TOC scroll bar, which shows how far through the article you are. Possibly on mobile we should have a horizontal scroll bar for the article body.
Open Thread Autumn 2025
(I think, by ‘positive’, Ben meant “explain positions that the group agrees with” rather than “say some nice things about each group”)
I thought Richard was saying “why would the [thing you do to offset] become worth it once you’ve done [thing you want to offset]? Probably it’s worth doing or not, and probably [thing you want to offset] is bad to do or fine, irrespective of choosing the other”
One issue for me: I don’t want to spend that much time reading text where most of the content didn’t come from a human mind. If someone used a lot of LLM output, that makes the contentful stuff less likely to be meaningful. So I want to make use of quick heuristics to triage.
I agree about the long delay in frontpaging; getting that time down has been one of my side projects. I’ve trained a logistic classifier to predict the eventual destination of a post, and currently mods are seeing those predictions when they process posts. If the predictions perform well for a while, we’ll have them go live and review the classifications retrospectively.
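For the curious, a rough sketch of what such a classifier could look like; the features, labels, and training data here are illustrative assumptions, not the actual LessWrong pipeline.

```python
# Illustrative only: a bag-of-words logistic classifier predicting whether a
# post will end up on the frontpage (1) or stay personal (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post bodies with their eventual destination.
posts = [
    "A new decomposition of attention heads in small transformers ...",
    "Reminder: our local meetup is this Saturday at the usual cafe ...",
]
labels = [1, 0]  # 1 = frontpaged, 0 = personal

model = make_pipeline(
    TfidfVectorizer(max_features=10_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Mods could see this probability next to each post in the review queue.
new_post = ["Some newly submitted post body ..."]
print(model.predict_proba(new_post)[0, 1])  # P(frontpage)
```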