LessWrong dev & admin as of July 5th, 2022.
RobertM
Calling your Senator’s office is probably the cheapest effective thing you can do here, yeah.
Whoops, yes, thanks, edited.
Briefly analyzing the 10-year moratorium amendment
Curated. While I don’t agree with every single positive claim advanced in the post (in particular, I’m less confident that chain-of-thought monitoring will survive to be a useful technique in the regime of transformative AI), this is an excellent distillation of the reasons for skepticism re: interpretability as a cure-all for identifying deceptive AIs. I also happen to think that those reasons generalize to many other agendas.
Separately, it’s virtuous to publicly admit to changing one’s mind, especially when the incentives are stacked the way they are—given Neel’s substantial role in popularizing interpretability as a research direction, I can only imagine this would have been harder for him to write than for many other people.
Hey Shannon, please read our policy on LLM writing before making future posts consisting almost entirely of LLM-written content.
Curated. To the extent that we want to raise the sanity waterline, or otherwise improve society’s ability to converge on true beliefs, it’s important to understand the weaknesses of existing infrastructure. Being unable to reliably translate a prediction market’s pricing directly into implied odds of an outcome seems like a pretty substantial weakness. (Note that I’m not sure how much I believe the linked tweet; nonetheless I observe that the odds sure did seem mispriced and the provided explanation seems sufficient to cause some mispricings sometimes.) “Acceptance is the first step,” and all that.
@Dima (lain), please read our policy on LLM writing on LessWrong and hold off on submitting further posts until you’ve done that.
Also also, why are socialist-vibe blogposts so often relegated to “personal blogpost” while capitalist-vibe blogposts aren’t? I mean, I get the automatic barrage of downvotes, but you’d think the mods would at least try to appear impartial.
Posts are categorized as frontpage / personal once or twice per day, and start out as personal by default. Your post hasn’t been looked at yet. (The specific details of what object-level political takes a post has aren’t an input to that decision. Whether a post is frontpaged or not is a function of its “timelessness”—i.e. whether we expect people will still find value in reading the post years later—and general interest to the LW userbase.)
“The Urgency of Interpretability” (Dario Amodei)
Sorry, there was a temporary bug where we were returning mismatched reward indicators to the client. It’s since been patched! I don’t believe anybody actually rolled The Void during this period.
Pico-lightcone purchases are back up, now that we think we’ve ruled out any obvious remaining bugs. (But do let us know if you buy any and don’t get credited within a few minutes.)
If you had some vague prompt like “write an essay about how the field of alignment is misguided” and then proofread it, you’ve met the criteria as laid out.
No, such outputs will almost certainly fail these criteria (since they will by default be written with the typical LLM “style”).
“10x engineers” are a thing, and if we assume they’re high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs. Have you observed any specific people suddenly becoming 10x more prolific?
In addition to the objection from Archimedes, another reason this is unlikely to be true is that 10x coders are often much more productive than other engineers because they’ve heavily optimized around solving specific problems or skills that other engineers are bottlenecked by, and most of those optimizations don’t readily admit of having an LLM suddenly inserted into the loop.
Not at the moment, but it is an obvious sort of thing to want.
Thanks for the heads up, we’ll have this fixed shortly (just need to re-index all the wiki pages once).
Eliezer’s Lost Alignment Articles / The Arbital Sequence
Arbital has been imported to LessWrong
Curated. This post does at least two things I find very valuable:
Accurately represents differing perspectives on a contentious topic
Makes clear, epistemically legible arguments on a confusing topic
And so I think that this post both describes and advances the canonical “state of the argument” with respect to the Sharp Left Turn (and similar concerns). I hope that other people will also find it helpful in improving their understanding of e.g. objections to basic evolutionary analogies (and why those objections shouldn’t make you very optimistic).
For the first, we have the Read History page. For the second, there are some recommendations underneath the comments section of each post, but they’re not fully general. For the third—do you mean allowing authors on LessWrong to have paid subscribers?