[Question] Is LessWrong a “classic style intellectual world”?

Gordon Seidoh Worley · Feb 26, 2019, 9:33 PM
30 points · 6 comments · 1 min read

Informal Post on Motivation

Ruby · Feb 23, 2019, 11:35 PM
29 points · 4 comments · 8 min read

Who wants to be a Millionaire?

Bucky · Feb 1, 2019, 2:02 PM
29 points · 1 comment · 11 min read

Constructing Goodhart

johnswentworth · Feb 3, 2019, 9:59 PM
29 points · 10 comments · 3 min read

Quantifying anthropic effects on the Fermi paradox

Lukas Finnveden · Feb 15, 2019, 10:51 AM
29 points · 5 comments · 27 min read

Make an appointment with your saner self

MalcolmOcean · Feb 8, 2019, 5:05 AM
28 points · 0 comments · 4 min read

[Question] How important is it that LW has an unlimited supply of karma?

Bird Concept · Feb 11, 2019, 1:41 AM
27 points · 9 comments · 2 min read

Functional silence: communication that minimizes change of receiver’s beliefs

chaosmage · Feb 12, 2019, 9:32 PM
27 points · 5 comments · 2 min read

Predictive Reasoning Systems

ozziegooen · Feb 20, 2019, 7:44 PM
27 points · 2 comments · 5 min read

[Question] What’s your favorite LessWrong post?

pepe_prime · Feb 21, 2019, 10:39 AM
27 points · 8 comments · 1 min read

Kocherga’s leaflet

Slava Matyukhin · Feb 19, 2019, 12:06 PM
26 points · 2 comments · 1 min read

Extraordinary ethics require extraordinary arguments

aaq · Feb 17, 2019, 2:59 PM
26 points · 6 comments · 2 min read

[Question] How could “Kickstarter for Inadequate Equilibria” be used for evil or turn out to be net-negative?

Raemon · Feb 21, 2019, 9:36 PM
25 points · 17 comments · 1 min read

Alignment Newsletter #45

Rohin Shah · Feb 14, 2019, 2:10 AM
25 points · 2 comments · 8 min read
(mailchi.mp)

Reinforcement Learning in the Iterated Amplification Framework

William_S · Feb 9, 2019, 12:56 AM
25 points · 12 comments · 4 min read

Thoughts on Ben Garfinkel’s “How sure are we about this AI stuff?”

David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:09 PM
25 points · 17 comments · 1 min read

Would I think for ten thousand years?

Stuart_Armstrong · Feb 11, 2019, 7:37 PM
25 points · 13 comments · 1 min read

AI Safety Prerequisites Course: Revamp and New Lessons

philip_b · Feb 3, 2019, 9:04 PM
24 points · 5 comments · 1 min read

[Question] When does introspection avoid the pitfalls of rumination?

rk · Feb 20, 2019, 2:14 PM
24 points · 12 comments · 1 min read

Rationalist Vipassana Meditation Retreat

DreamFlasher · Feb 25, 2019, 10:10 AM
24 points · 2 comments · 1 min read

Ideas for Next Generation Prediction Technologies

ozziegooen · Feb 21, 2019, 11:38 AM
22 points · 25 comments · 7 min read

Can We Place Trust in Post-AGI Forecasting Evaluations?

ozziegooen · Feb 17, 2019, 7:20 PM
22 points · 16 comments · 2 min read

Three Kinds of Research Documents: Exploration, Explanation, Academic

ozziegooen · Feb 13, 2019, 9:25 PM
22 points · 18 comments · 3 min read

Cooperation is for Winners

Jacob Falkovich · Feb 15, 2019, 2:58 PM
21 points · 6 comments · 4 min read

[Question] If Rationality can be likened to a ‘Martial Art’, what would be the Forms?

Bae's Theorem · Feb 6, 2019, 5:48 AM
21 points · 10 comments · 1 min read

Security amplification

paulfchristiano · Feb 6, 2019, 5:28 PM
21 points · 2 comments · 13 min read

So You Want to Colonize The Universe

Diffractor · Feb 27, 2019, 10:17 AM
21 points · 18 comments · 6 min read

[Question] What are some of bizarre theories based on anthropic reasoning?

Dr. Jamchie · Feb 3, 2019, 6:48 PM
21 points · 13 comments · 1 min read

So You Want to Colonize The Universe Part 5: The Actual Design

Diffractor · Feb 27, 2019, 10:23 AM
20 points · 4 comments · 5 min read

Nuances with ascription universality

evhub · Feb 12, 2019, 11:38 PM
20 points · 1 comment · 2 min read

So You Want To Colonize The Universe Part 3: Dust

Diffractor · Feb 27, 2019, 10:20 AM
20 points · 9 comments · 7 min read

Impact Prizes as an alternative to Certificates of Impact

ozziegooen · Feb 20, 2019, 12:46 AM
20 points · 0 comments · 1 min read
(forum.effectivealtruism.org)

On Long and Insightful Posts

Qria · Feb 13, 2019, 3:52 AM
19 points · 3 comments · 1 min read

Automated Nomic Game 2

jefftk · Feb 5, 2019, 10:11 PM
19 points · 2 comments · 2 min read

Open Thread February 2019

ryan_b · Feb 7, 2019, 6:00 PM
19 points · 19 comments · 1 min read

Fighting the allure of depressive realism

aaq · Feb 10, 2019, 4:46 PM
19 points · 2 comments · 3 min read

Layers of Expertise and the Curse of Curiosity

Gyrodiot · Feb 12, 2019, 11:41 PM
19 points · 1 comment · 6 min read

How to notice being mind-hacked

Shmi · Feb 2, 2019, 11:13 PM
18 points · 22 comments · 2 min read

[Question] Is there an assurance-contract website in work?

Yoav Ravid · Feb 20, 2019, 6:14 AM
18 points · 31 comments · 1 min read

Alignment Newsletter #44

Rohin Shah · Feb 6, 2019, 8:30 AM
18 points · 0 comments · 9 min read
(mailchi.mp)

[Question] Native mental representations that give huge speedups on problems?

two-ox-heads · Feb 25, 2019, 11:42 PM
17 points · 4 comments · 2 min read

So You Want to Colonize the Universe Part 2: Deep Time Engineering

Diffractor · Feb 27, 2019, 10:18 AM
17 points · 6 comments · 4 min read

[Question] Who owns OpenAI’s new language model?

ioannes · Feb 14, 2019, 5:51 PM
16 points · 9 comments · 1 min read

So you want to be a wizard

NaiveTortoise · Feb 15, 2019, 3:43 PM
16 points · 0 comments · 1 min read
(jvns.ca)

Robin Hanson on Lumpiness of AI Services

DanielFilan · Feb 17, 2019, 11:08 PM
15 points · 2 comments · 2 min read
(www.overcomingbias.com)

January 2019 gwern.net newsletter

gwern · Feb 4, 2019, 3:53 PM
15 points · 0 comments · 1 min read
(www.gwern.net)

[Link] OpenAI on why we need social scientists

ioannes · Feb 19, 2019, 4:59 PM
14 points · 3 comments · 1 min read

[Question] Where to find Base Rates?

adam demirel · Feb 26, 2019, 10:44 AM
14 points · 7 comments · 1 min read

So You Want to Colonize The Universe Part 4: Velocity Changes and Energy

Diffractor · Feb 27, 2019, 10:22 AM
14 points · 9 comments · 10 min read

My use of the phrase “Super-Human Feedback”

David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:11 PM
13 points · 0 comments · 1 min read