[Question] How could “Kickstarter for Inadequate Equilibria” be used for evil or turn out to be net-negative?

Raemon · Feb 21, 2019, 9:36 PM
25 points · 17 comments · 1 min read · LW link

[Question] If a “Kickstarter for Inadequate Equilibria” was built, do you have a concrete inadequate equilibrium to fix?

Raemon · Feb 21, 2019, 9:32 PM
56 points · 40 comments · 1 min read · LW link

Life, not a game

ArthurLidia · Feb 21, 2019, 7:10 PM
−10 points · 2 comments · 2 min read · LW link

Ideas for Next Generation Prediction Technologies

ozziegooen · Feb 21, 2019, 11:38 AM
22 points · 25 comments · 7 min read · LW link

[Question] What’s your favorite LessWrong post?

pepe_prime · Feb 21, 2019, 10:39 AM
27 points · 8 comments · 1 min read · LW link

Thoughts on Human Models

Feb 21, 2019, 9:10 AM
127 points · 32 comments · 10 min read · LW link · 1 review

Two Small Experiments on GPT-2

jimrandomh · Feb 21, 2019, 2:59 AM
54 points · 28 comments · 1 min read · LW link

Predictive Reasoning Systems

ozziegooen · Feb 20, 2019, 7:44 PM
27 points · 2 comments · 5 min read · LW link

LessWrong DC: Age of Enlightenment

rusalkii · Feb 20, 2019, 6:39 PM
1 point · 0 comments · 1 min read · LW link

[Question] When does introspection avoid the pitfalls of rumination?

rk · Feb 20, 2019, 2:14 PM
24 points · 12 comments · 1 min read · LW link

What I learned giving a lecture on NVC

Yoav Ravid · Feb 20, 2019, 9:08 AM
13 points · 2 comments · 2 min read · LW link

Pavlov Generalizes

abramdemski · Feb 20, 2019, 9:03 AM
67 points · 4 comments · 7 min read · LW link

Leukemia Has Won

Capybasilisk · Feb 20, 2019, 7:11 AM
1 point · 2 comments · 1 min read · LW link
(alex.blog)

[Question] Is there an assurance-contract website in the works?

Yoav Ravid · Feb 20, 2019, 6:14 AM
18 points · 31 comments · 1 min read · LW link

First steps of a rationality skill bootstrap

hamnox · Feb 20, 2019, 12:57 AM
10 points · 0 comments · 6 min read · LW link

Impact Prizes as an alternative to Certificates of Impact

ozziegooen · Feb 20, 2019, 12:46 AM
20 points · 0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

De-Bugged brains wanted

marcus_gabler · Feb 19, 2019, 6:30 PM
−16 points · 17 comments · 1 min read · LW link

[Link] OpenAI on why we need social scientists

ioannes · Feb 19, 2019, 4:59 PM
14 points · 3 comments · 1 min read · LW link

Kocherga’s leaflet

Slava Matyukhin · Feb 19, 2019, 12:06 PM
26 points · 2 comments · 1 min read · LW link

Blackmail

Zvi · Feb 19, 2019, 3:50 AM
133 points · 55 comments · 16 min read · LW link · 2 reviews
(thezvi.wordpress.com)

Decelerating: laser vs gun vs rocket

Stuart_Armstrong · Feb 18, 2019, 11:21 PM
31 points · 16 comments · 4 min read · LW link

Epistemic Tenure

Scott Garrabrant · Feb 18, 2019, 10:56 PM
89 points · 27 comments · 3 min read · LW link

[Question] A Strange Situation

Flange Finnegan · Feb 18, 2019, 8:38 PM
12 points · 10 comments · 1 min read · LW link

Implications of GPT-2

Gurkenglas · Feb 18, 2019, 10:57 AM
38 points · 28 comments · 1 min read · LW link

Is voting theory important? An attempt to check my bias.

Jameson Quinn · Feb 17, 2019, 11:45 PM
42 points · 14 comments · 6 min read · LW link

Avoiding Jargon Confusion

Raemon · Feb 17, 2019, 11:37 PM
46 points · 35 comments · 4 min read · LW link

Robin Hanson on Lumpiness of AI Services

DanielFilan · Feb 17, 2019, 11:08 PM
15 points · 2 comments · 2 min read · LW link
(www.overcomingbias.com)

The Clockmaker’s Argument (But not Really)

GregorDeVillain · Feb 17, 2019, 9:20 PM
1 point · 3 comments · 3 min read · LW link

Can We Place Trust in Post-AGI Forecasting Evaluations?

ozziegooen · Feb 17, 2019, 7:20 PM
22 points · 16 comments · 2 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · Feb 17, 2019, 6:28 PM
6 points · 2 comments · 1 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · Feb 17, 2019, 6:27 PM
6 points · 0 comments · 1 min read · LW link

Extraordinary ethics require extraordinary arguments

aaq · Feb 17, 2019, 2:59 PM
26 points · 6 comments · 2 min read · LW link

Limiting an AGI’s Context Temporally

EulersApprentice · Feb 17, 2019, 3:29 AM
5 points · 11 comments · 1 min read · LW link

Major Donation: Long Term Future Fund Application Extended 1 Week

habryka · Feb 16, 2019, 11:30 PM
42 points · 3 comments · 1 min read · LW link

Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery

Alexander230 · Feb 16, 2019, 10:29 PM
3 points · 2 comments · 1 min read · LW link

[Question] Is there a way to hire academics hourly?

Ixiel · Feb 16, 2019, 2:21 PM
6 points · 2 comments · 1 min read · LW link

Graceful Shutdown

Martin Sustrik · Feb 16, 2019, 11:30 AM
10 points · 4 comments · 13 min read · LW link
(250bpm.com)

[Question] Why didn’t Agoric Computing become popular?

Wei Dai · Feb 16, 2019, 6:19 AM
52 points · 22 comments · 2 min read · LW link

Pedagogy as Struggle

lifelonglearner · Feb 16, 2019, 2:12 AM
13 points · 9 comments · 2 min read · LW link

How the MtG Color Wheel Explains AI Safety

Scott Garrabrant · Feb 15, 2019, 11:42 PM
65 points · 4 comments · 6 min read · LW link

Some disjunctive reasons for urgency on AI risk

Wei Dai · Feb 15, 2019, 8:43 PM
36 points · 24 comments · 1 min read · LW link

So you want to be a wizard

NaiveTortoise · Feb 15, 2019, 3:43 PM
16 points · 0 comments · 1 min read · LW link
(jvns.ca)

Cooperation is for Winners

Jacob Falkovich · Feb 15, 2019, 2:58 PM
21 points · 6 comments · 4 min read · LW link

Quantifying anthropic effects on the Fermi paradox

Lukas Finnveden · Feb 15, 2019, 10:51 AM
29 points · 5 comments · 27 min read · LW link

[Question] How does OpenAI’s language model affect our AI timeline estimates?

jimrandomh · Feb 15, 2019, 3:11 AM
50 points · 7 comments · 1 min read · LW link

Has The Function To Sort Posts By Votes Stopped Working?

Capybasilisk · Feb 14, 2019, 7:14 PM
1 point · 3 comments · 1 min read · LW link

[Question] Who owns OpenAI’s new language model?

ioannes · Feb 14, 2019, 5:51 PM
16 points · 9 comments · 1 min read · LW link

The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work

ozziegooen · Feb 14, 2019, 4:21 PM
43 points · 15 comments · 3 min read · LW link

Short story: An AGI’s Repugnant Physics Experiment

ozziegooen · Feb 14, 2019, 2:46 PM
9 points · 5 comments · 1 min read · LW link

New York Restaurants I Love: Breakfast

Zvi · Feb 14, 2019, 1:10 PM
10 points · 3 comments · 8 min read · LW link
(thezvi.wordpress.com)