Summary: “How to Write Quickly...” by John Wentworth

Pablo Repetto · 11 Apr 2022 23:26 UTC
4 points
0 comments · 2 min read · LW link
(pabloernesto.github.io)

Rambling thoughts on having multiple selves

cranberry_bear · 11 Apr 2022 22:43 UTC
15 points
1 comment · 3 min read · LW link

An AI-in-a-box success model

azsantosk · 11 Apr 2022 22:28 UTC
16 points
1 comment · 10 min read · LW link

The Regulatory Option: A response to near 0% survival odds

Matthew Lowenstein · 11 Apr 2022 22:00 UTC
46 points
21 comments · 6 min read · LW link

The Efficient LessWrong Hypothesis—Stock Investing Competition

MrThink · 11 Apr 2022 20:43 UTC
30 points
35 comments · 2 min read · LW link

Review: Structure and Interpretation of Computer Programs

L Rudolf L · 11 Apr 2022 20:27 UTC
16 points
9 comments · 10 min read · LW link
(www.strataoftheworld.com)

[Question] Underappreciated content on LessWrong

Ege Erdil · 11 Apr 2022 17:40 UTC
22 points
5 comments · 1 min read · LW link

Editing Advice for LessWrong Users

JustisMills · 11 Apr 2022 16:32 UTC
231 points
14 comments · 6 min read · LW link · 1 review

Post-history is written by the martyrs

Veedrac · 11 Apr 2022 15:45 UTC
50 points
2 comments · 19 min read · LW link
(www.royalroad.com)

What Chords Do You Need?

jefftk · 11 Apr 2022 15:00 UTC
11 points
0 comments · 3 min read · LW link
(www.jefftk.com)

What can people not smart/technical/”competent” enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?)

Alex K. Chen (parrot) · 11 Apr 2022 14:05 UTC
7 points
3 comments · 3 min read · LW link

Goodhart’s Law Causal Diagrams

11 Apr 2022 13:52 UTC
32 points
5 comments · 6 min read · LW link

China Covid Update #1

Zvi · 11 Apr 2022 13:40 UTC
88 points
22 comments · 3 min read · LW link
(thezvi.wordpress.com)

ACX Meetup Copenhagen, Denmark

Søren Elverlin · 11 Apr 2022 11:53 UTC
4 points
0 comments · 1 min read · LW link

Is it time to start thinking about what AI Friendliness means?

ZT5 · 11 Apr 2022 9:32 UTC
18 points
6 comments · 3 min read · LW link

[Question] Is there an equivalent of the CDF for grading predictions?

Optimization Process · 11 Apr 2022 5:30 UTC
6 points
5 comments · 1 min read · LW link

[Question] Impactful data science projects

Just Learning · 11 Apr 2022 4:27 UTC
4 points
2 comments · 1 min read · LW link

[Question] Could we set a resolution/stopper for the upper bound of the utility function of an AI?

FinalFormal2 · 11 Apr 2022 3:10 UTC
−5 points
2 comments · 1 min read · LW link

Epistemic Slipperiness

Raemon · 11 Apr 2022 1:48 UTC
55 points
17 comments · 7 min read · LW link

[Question] What is the most efficient way to create more worlds in the many worlds interpretation of quantum mechanics?

seank · 11 Apr 2022 0:26 UTC
4 points
11 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe

Yitz · 10 Apr 2022 21:02 UTC
92 points
141 comments · 2 min read · LW link

Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points

TurnTrout · 10 Apr 2022 18:45 UTC
151 points
7 comments · 9 min read · LW link

[Question] Does non-access to outputs prevent recursive self-improvement?

Gunnar_Zarncke · 10 Apr 2022 18:37 UTC
15 points
0 comments · 1 min read · LW link

A Brief Excursion Into Molecular Neuroscience

Jan · 10 Apr 2022 17:55 UTC
48 points
8 comments · 19 min read · LW link
(universalprior.substack.com)

Finally Entering Alignment

Ulisse Mini · 10 Apr 2022 17:01 UTC
79 points
8 comments · 2 min read · LW link

Schelling Meetup Toronto

Sean Aubin · 10 Apr 2022 13:58 UTC
3 points
0 comments · 1 min read · LW link

Is Fisherian Runaway Gradient Hacking?

Ryan Kidd · 10 Apr 2022 13:47 UTC
15 points
6 comments · 4 min read · LW link

Worse than an unaligned AGI

shminux · 10 Apr 2022 3:35 UTC
−1 points
11 comments · 1 min read · LW link

Time-Time Tradeoffs

Akash · 10 Apr 2022 2:33 UTC
17 points
1 comment · 3 min read · LW link
(forum.effectivealtruism.org)

Boston Contra: Fully Gender-Free

jefftk · 10 Apr 2022 0:40 UTC
3 points
12 comments · 1 min read · LW link
(www.jefftk.com)

[Question] Hidden comments settings not working?

TLW · 9 Apr 2022 23:15 UTC
4 points
2 comments · 1 min read · LW link

Godshatter Versus Legibility: A Fundamentally Different Approach To AI Alignment

LukeOnline · 9 Apr 2022 21:43 UTC
15 points
14 comments · 7 min read · LW link

A concrete bet offer to those with short AGI timelines

9 Apr 2022 21:41 UTC
198 points
116 comments · 5 min read · LW link

New: use The Nonlinear Library to listen to the top LessWrong posts of all time

KatWoods · 9 Apr 2022 20:50 UTC
39 points
9 comments · 8 min read · LW link

140 Cognitive Biases You Should Know

André Ferretti · 9 Apr 2022 17:15 UTC
7 points
7 comments · 1 min read · LW link

Strategies for keeping AIs narrow in the short term

Rossin · 9 Apr 2022 16:42 UTC
9 points
3 comments · 3 min read · LW link

Hyperbolic takeoff

Ege Erdil · 9 Apr 2022 15:57 UTC
17 points
7 comments · 10 min read · LW link
(www.metaculus.com)

Elicit: Language Models as Research Assistants

9 Apr 2022 14:56 UTC
71 points
6 comments · 13 min read · LW link

Emergent Ventures/Schmidt (new grantor for individual researchers)

gwern · 9 Apr 2022 14:41 UTC
21 points
6 comments · 1 min read · LW link
(marginalrevolution.com)

AI safety: the ultimate trolley problem

chaosmage · 9 Apr 2022 12:05 UTC
−21 points
6 comments · 1 min read · LW link

AMA Conjecture, A New Alignment Startup

adamShimi · 9 Apr 2022 9:43 UTC
47 points
42 comments · 1 min read · LW link

[Question] What advice do you have for someone struggling to detach their grim-o-meter?

Zorger74 · 9 Apr 2022 7:35 UTC
6 points
3 comments · 1 min read · LW link

[Question] Can AI systems have extremely impressive outputs and also not need to be aligned because they aren’t general enough or something?

WilliamKiely · 9 Apr 2022 6:03 UTC
6 points
3 comments · 1 min read · LW link

Buy-in Before Randomization

jefftk · 9 Apr 2022 1:30 UTC
26 points
9 comments · 1 min read · LW link
(www.jefftk.com)

Why Instrumental Goals are not a big AI Safety Problem

Jonathan Paulson · 9 Apr 2022 0:10 UTC
0 points
7 comments · 3 min read · LW link

A method of writing content easily with little anxiety

jessicata · 8 Apr 2022 22:11 UTC
64 points
19 comments · 3 min read · LW link
(unstableontology.com)

Good Heart Donation Lottery Winner

Gordon Seidoh Worley · 8 Apr 2022 20:34 UTC
21 points
0 comments · 1 min read · LW link

Roam Research Mobile is Out!

Logan Riggs · 8 Apr 2022 19:05 UTC
12 points
0 comments · 1 min read · LW link

Progress Report 4: logit lens redux

Nathan Helm-Burger · 8 Apr 2022 18:35 UTC
3 points
0 comments · 2 min read · LW link

[Question] What would the creation of aligned AGI look like for us?

Perhaps · 8 Apr 2022 18:05 UTC
3 points
4 comments · 1 min read · LW link