Informal Post on Motivation

Ruby · Feb 23, 2019, 11:35 PM
29 points
4 comments · 8 min read · LW link

Can HCH epistemically dominate Ramanujan?

zhukeepa · Feb 23, 2019, 10:00 PM
33 points
6 comments · 2 min read · LW link

AI—Intelligence Realising Itself

TPATA · Feb 23, 2019, 9:13 PM
−4 points
0 comments · 3 min read · LW link

Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall

SebastianG · Feb 23, 2019, 5:48 PM
8 points
19 comments · 4 min read · LW link

“Other people are wrong” vs “I am right”

Buck · Feb 22, 2019, 8:01 PM
267 points
20 comments · 9 min read · LW link · 2 reviews

Tiles: Report on Programmatic Code Generation

Martin Sustrik · Feb 22, 2019, 12:10 AM
5 points
5 comments · 6 min read · LW link
(250bpm.com)

Alignment Newsletter #46

Rohin Shah · Feb 22, 2019, 12:10 AM
12 points
0 comments · 9 min read · LW link
(mailchi.mp)

[Question] How could “Kickstarter for Inadequate Equilibria” be used for evil or turn out to be net-negative?

Raemon · Feb 21, 2019, 9:36 PM
25 points
17 comments · 1 min read · LW link

[Question] If a “Kickstarter for Inadequate Equlibria” was built, do you have a concrete inadequate equilibrium to fix?

Raemon · Feb 21, 2019, 9:32 PM
56 points
40 comments · 1 min read · LW link

Life, not a game

ArthurLidia · Feb 21, 2019, 7:10 PM
−10 points
2 comments · 2 min read · LW link

Ideas for Next Generation Prediction Technologies

ozziegooen · Feb 21, 2019, 11:38 AM
22 points
25 comments · 7 min read · LW link

[Question] What’s your favorite LessWrong post?

pepe_prime · Feb 21, 2019, 10:39 AM
27 points
8 comments · 1 min read · LW link

Thoughts on Human Models

Feb 21, 2019, 9:10 AM
127 points
32 comments · 10 min read · LW link · 1 review

Two Small Experiments on GPT-2

jimrandomh · Feb 21, 2019, 2:59 AM
54 points
28 comments · 1 min read · LW link

Predictive Reasoning Systems

ozziegooen · Feb 20, 2019, 7:44 PM
27 points
2 comments · 5 min read · LW link

LessWrong DC: Age of Enlightenment

rusalkii · Feb 20, 2019, 6:39 PM
1 point
0 comments · 1 min read · LW link

[Question] When does introspection avoid the pitfalls of rumination?

rk · Feb 20, 2019, 2:14 PM
24 points
12 comments · 1 min read · LW link

What i learned giving a lecture on NVC

Yoav Ravid · Feb 20, 2019, 9:08 AM
13 points
2 comments · 2 min read · LW link

Pavlov Generalizes

abramdemski · Feb 20, 2019, 9:03 AM
67 points
4 comments · 7 min read · LW link

Leukemia Has Won

Capybasilisk · Feb 20, 2019, 7:11 AM
1 point
2 comments · 1 min read · LW link
(alex.blog)

[Question] Is there an assurance-contract website in work?

Yoav Ravid · Feb 20, 2019, 6:14 AM
18 points
31 comments · 1 min read · LW link

First steps of a rationality skill bootstrap

hamnox · Feb 20, 2019, 12:57 AM
10 points
0 comments · 6 min read · LW link

Impact Prizes as an alternative to Certificates of Impact

ozziegooen · Feb 20, 2019, 12:46 AM
20 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

De-Bugged brains wanted

marcus_gabler · Feb 19, 2019, 6:30 PM
−16 points
17 comments · 1 min read · LW link

[Link] OpenAI on why we need social scientists

ioannes · Feb 19, 2019, 4:59 PM
14 points
3 comments · 1 min read · LW link

Kocherga’s leaflet

Slava Matyukhin · Feb 19, 2019, 12:06 PM
26 points
2 comments · 1 min read · LW link

Blackmail

Zvi · Feb 19, 2019, 3:50 AM
133 points
55 comments · 16 min read · LW link · 2 reviews
(thezvi.wordpress.com)

Decelerating: laser vs gun vs rocket

Stuart_Armstrong · Feb 18, 2019, 11:21 PM
31 points
16 comments · 4 min read · LW link

Epistemic Tenure

Scott Garrabrant · Feb 18, 2019, 10:56 PM
89 points
27 comments · 3 min read · LW link

[Question] A Strange Situation

Flange Finnegan · Feb 18, 2019, 8:38 PM
12 points
10 comments · 1 min read · LW link

Implications of GPT-2

Gurkenglas · Feb 18, 2019, 10:57 AM
38 points
28 comments · 1 min read · LW link

Is voting theory important? An attempt to check my bias.

Jameson Quinn · Feb 17, 2019, 11:45 PM
42 points
14 comments · 6 min read · LW link

Avoiding Jargon Confusion

Raemon · Feb 17, 2019, 11:37 PM
46 points
35 comments · 4 min read · LW link

Robin Hanson on Lumpiness of AI Services

DanielFilan · Feb 17, 2019, 11:08 PM
15 points
2 comments · 2 min read · LW link
(www.overcomingbias.com)

The Clockmaker’s Argument (But not Really)

GregorDeVillain · Feb 17, 2019, 9:20 PM
1 point
3 comments · 3 min read · LW link

Can We Place Trust in Post-AGI Forecasting Evaluations?

ozziegooen · Feb 17, 2019, 7:20 PM
22 points
16 comments · 2 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · Feb 17, 2019, 6:28 PM
6 points
2 comments · 1 min read · LW link

Cambridge SSC Meetup

NoSignalNoNoise · 17 Feb 2019 18:27 UTC
6 points
0 comments · 1 min read · LW link

Extraordinary ethics require extraordinary arguments

aaq · 17 Feb 2019 14:59 UTC
26 points
6 comments · 2 min read · LW link

Limiting an AGI’s Context Temporally

EulersApprentice · 17 Feb 2019 3:29 UTC
5 points
11 comments · 1 min read · LW link

Major Donation: Long Term Future Fund Application Extended 1 Week

habryka · 16 Feb 2019 23:30 UTC
42 points
3 comments · 1 min read · LW link

Games in Kocherga club: Fallacymania, Tower of Chaos, Scientific Discovery

Alexander230 · 16 Feb 2019 22:29 UTC
3 points
2 comments · 1 min read · LW link

[Question] Is there a way to hire academics hourly?

Ixiel · 16 Feb 2019 14:21 UTC
6 points
2 comments · 1 min read · LW link

Graceful Shutdown

Martin Sustrik · 16 Feb 2019 11:30 UTC
10 points
4 comments · 13 min read · LW link
(250bpm.com)

[Question] Why didn’t Agoric Computing become popular?

Wei Dai · 16 Feb 2019 6:19 UTC
52 points
22 comments · 2 min read · LW link

Pedagogy as Struggle

lifelonglearner · 16 Feb 2019 2:12 UTC
13 points
9 comments · 2 min read · LW link

How the MtG Color Wheel Explains AI Safety

Scott Garrabrant · 15 Feb 2019 23:42 UTC
65 points
4 comments · 6 min read · LW link

Some disjunctive reasons for urgency on AI risk

Wei Dai · 15 Feb 2019 20:43 UTC
36 points
24 comments · 1 min read · LW link

So you want to be a wizard

NaiveTortoise · 15 Feb 2019 15:43 UTC
16 points
0 comments · 1 min read · LW link
(jvns.ca)

Cooperation is for Winners

Jacob Falkovich · 15 Feb 2019 14:58 UTC
21 points
6 comments · 4 min read · LW link