Summary: “How to Write Quickly...” by John Wentworth

Pablo Repetto, Apr 11, 2022, 11:26 PM
4 points
0 comments · 2 min read · LW link
(pabloernesto.github.io)

Rambling thoughts on having multiple selves

cranberry_bear, Apr 11, 2022, 10:43 PM
15 points
1 comment · 3 min read · LW link

An AI-in-a-box success model

azsantosk, Apr 11, 2022, 10:28 PM
16 points
1 comment · 10 min read · LW link

The Regulatory Option: A response to near 0% survival odds

Matthew Lowenstein, Apr 11, 2022, 10:00 PM
46 points
21 comments · 6 min read · LW link

The Efficient LessWrong Hypothesis—Stock Investing Competition

MrThink, Apr 11, 2022, 8:43 PM
30 points
35 comments · 2 min read · LW link

Review: Structure and Interpretation of Computer Programs

L Rudolf L, Apr 11, 2022, 8:27 PM
17 points
9 comments · 10 min read · LW link
(www.strataoftheworld.com)

[Question] Underappreciated content on LessWrong

Ege Erdil, Apr 11, 2022, 5:40 PM
22 points
5 comments · 1 min read · LW link

Editing Advice for LessWrong Users

JustisMills, Apr 11, 2022, 4:32 PM
234 points
14 comments · 6 min read · LW link · 1 review

Post-history is written by the martyrs

Veedrac, Apr 11, 2022, 3:45 PM
50 points
2 comments · 19 min read · LW link
(www.royalroad.com)

What Chords Do You Need?

jefftk, Apr 11, 2022, 3:00 PM
11 points
0 comments · 3 min read · LW link
(www.jefftk.com)

What can people not smart/technical/”competent” enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?)

Alex K. Chen (parrot), Apr 11, 2022, 2:05 PM
7 points
3 comments · 3 min read · LW link

Goodhart’s Law Causal Diagrams

Apr 11, 2022, 1:52 PM
35 points
6 comments · 6 min read · LW link

China Covid Update #1

Zvi, Apr 11, 2022, 1:40 PM
88 points
22 comments · 3 min read · LW link
(thezvi.wordpress.com)

ACX Meetup Copenhagen, Denmark

Søren Elverlin, Apr 11, 2022, 11:53 AM
4 points
0 comments · 1 min read · LW link

Is it time to start thinking about what AI Friendliness means?

Victor Novikov, Apr 11, 2022, 9:32 AM
18 points
6 comments · 3 min read · LW link

[Question] Is there an equivalent of the CDF for grading predictions?

Optimization Process, Apr 11, 2022, 5:30 AM
6 points
5 comments · 1 min read · LW link

[Question] Impactful data science projects

Valentin2026, Apr 11, 2022, 4:27 AM
5 points
2 comments · 1 min read · LW link

[Question] Could we set a resolution/stopper for the upper bound of the utility function of an AI?

FinalFormal2, Apr 11, 2022, 3:10 AM
−5 points
2 comments · 1 min read · LW link

Epistemic Slipperiness

Raemon, Apr 11, 2022, 1:48 AM
59 points
18 comments · 7 min read · LW link

[Question] What is the most efficient way to create more worlds in the many worlds interpretation of quantum mechanics?

seank, Apr 11, 2022, 12:26 AM
4 points
11 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe

Yitz, Apr 10, 2022, 9:02 PM
92 points
141 comments · 2 min read · LW link

Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points

TurnTrout, Apr 10, 2022, 6:45 PM
154 points
7 comments · 9 min read · LW link

[Question] Does non-access to outputs prevent recursive self-improvement?

Gunnar_Zarncke, Apr 10, 2022, 6:37 PM
15 points
0 comments · 1 min read · LW link

A Brief Excursion Into Molecular Neuroscience

Jan, Apr 10, 2022, 5:55 PM
48 points
8 comments · 19 min read · LW link
(universalprior.substack.com)

Finally Entering Alignment

Ulisse Mini, Apr 10, 2022, 5:01 PM
80 points
8 comments · 2 min read · LW link

Schelling Meetup Toronto

Sean Aubin, Apr 10, 2022, 1:58 PM
3 points
0 comments · 1 min read · LW link

Is Fisherian Runaway Gradient Hacking?

Ryan Kidd, Apr 10, 2022, 1:47 PM
15 points
6 comments · 4 min read · LW link

Worse than an unaligned AGI

Shmi, Apr 10, 2022, 3:35 AM
−1 points
11 comments · 1 min read · LW link

Time-Time Tradeoffs

Orpheus16, Apr 10, 2022, 2:33 AM
18 points
1 comment · 3 min read · LW link
(forum.effectivealtruism.org)

Boston Contra: Fully Gender-Free

jefftk, Apr 10, 2022, 12:40 AM
3 points
12 comments · 1 min read · LW link
(www.jefftk.com)

[Question] Hidden comments settings not working?

TLW, Apr 9, 2022, 11:15 PM
4 points
2 comments · 1 min read · LW link

Godshatter Versus Legibility: A Fundamentally Different Approach To AI Alignment

LukeOnline, Apr 9, 2022, 9:43 PM
15 points
14 comments · 7 min read · LW link

A concrete bet offer to those with short AGI timelines

Apr 9, 2022, 9:41 PM
199 points
120 comments · 5 min read · LW link

New: use The Nonlinear Library to listen to the top LessWrong posts of all time

KatWoods, Apr 9, 2022, 8:50 PM
39 points
9 comments · 8 min read · LW link

140 Cognitive Biases You Should Know

André Ferretti, Apr 9, 2022, 5:15 PM
8 points
7 comments · 1 min read · LW link

Strategies for keeping AIs narrow in the short term

Rossin, Apr 9, 2022, 4:42 PM
9 points
3 comments · 3 min read · LW link

Hyperbolic takeoff

Ege Erdil, Apr 9, 2022, 3:57 PM
18 points
7 comments · 10 min read · LW link
(www.metaculus.com)

Elicit: Language Models as Research Assistants

Apr 9, 2022, 2:56 PM
71 points
6 comments · 13 min read · LW link

Emergent Ventures/Schmidt (new grantor for individual researchers)

gwern, Apr 9, 2022, 2:41 PM
21 points
6 comments · 1 min read · LW link
(marginalrevolution.com)

AI safety: the ultimate trolley problem

chaosmage, Apr 9, 2022, 12:05 PM
−21 points
6 comments · 1 min read · LW link

AMA Conjecture, A New Alignment Startup

adamShimi, Apr 9, 2022, 9:43 AM
47 points
42 comments · 1 min read · LW link

[Question] What advice do you have for someone struggling to detach their grim-o-meter?

Zorger74, Apr 9, 2022, 7:35 AM
6 points
3 comments · 1 min read · LW link

[Question] Can AI systems have extremely impressive outputs and also not need to be aligned because they aren’t general enough or something?

WilliamKiely, Apr 9, 2022, 6:03 AM
6 points
3 comments · 1 min read · LW link

Buy-in Before Randomization

jefftk, Apr 9, 2022, 1:30 AM
26 points
9 comments · 1 min read · LW link
(www.jefftk.com)

Why Instrumental Goals are not a big AI Safety Problem

Jonathan Paulson, Apr 9, 2022, 12:10 AM
0 points
7 comments · 3 min read · LW link

A method of writing content easily with little anxiety

jessicata, Apr 8, 2022, 10:11 PM
64 points
19 comments · 3 min read · LW link
(unstableontology.com)

Good Heart Donation Lottery Winner

Gordon Seidoh Worley, Apr 8, 2022, 8:34 PM
21 points
0 comments · 1 min read · LW link

Roam Research Mobile is Out!

Logan Riggs, Apr 8, 2022, 7:05 PM
12 points
0 comments · 1 min read · LW link

Progress Report 4: logit lens redux

Nathan Helm-Burger, Apr 8, 2022, 6:35 PM
4 points
0 comments · 2 min read · LW link

[Question] What would the creation of aligned AGI look like for us?

Perhaps, Apr 8, 2022, 6:05 PM
3 points
4 comments · 1 min read · LW link