TSR #3 Entrainment: Discussion
Hazard · Dec 1, 2017, 4:46 PM
11 points · 3 comments · 4 min read · LW link

MIRI’s 2017 Fundraiser
Malo · Dec 1, 2017, 1:45 PM
19 points · 4 comments · 13 min read · LW link

Comment on SSC’s Review of Inadequate Equilibria
Ben Pace · Dec 1, 2017, 11:46 AM
13 points · 5 comments · 2 min read · LW link

Cash transfers are not necessarily wealth transfers
Benquo · Dec 1, 2017, 10:10 AM
59 points · 36 comments · 11 min read · LW link
(benjaminrosshoffman.com)

December 2017 Media Thread
ArisKatsaris · Dec 1, 2017, 9:02 AM
1 point · 16 comments · 1 min read · LW link

Policy Selection Solves Most Problems
abramdemski · Dec 1, 2017, 12:35 AM
21 points · 7 comments · 13 min read · LW link

A limit to punishment
RST · Nov 30, 2017, 5:08 PM
4 points · 8 comments · 2 min read · LW link

Improved formalism for corruption in DIRL
Vanessa Kosoy · Nov 30, 2017, 4:52 PM
0 points · 0 comments · 2 min read · LW link

DeepMind article: AI Safety Gridworlds
scarcegreengrass · Nov 30, 2017, 4:13 PM
25 points · 6 comments · 1 min read · LW link
(deepmind.com)

Why DRL doesn’t work for arbitrary environments
Vanessa Kosoy · Nov 30, 2017, 12:22 PM
0 points · 0 comments · 3 min read · LW link

The Impossibility of the Intelligence Explosion
DragonGod · Nov 30, 2017, 5:47 AM
−4 points · 10 comments · 1 min read · LW link

Logical Updatelessness as a Robust Delegation Problem
Scott Garrabrant · Nov 30, 2017, 4:23 AM
0 points · 1 comment · 2 min read · LW link

Examples of Mitigating Assumption Risk
SatvikBeri · Nov 30, 2017, 2:09 AM
26 points · 14 comments · 1 min read · LW link

LDL 7: I wish I had a map
magfrump · Nov 30, 2017, 2:03 AM
13 points · 2 comments · 3 min read · LW link

The Mad Scientist Decision Problem
Linda Linsefors · Nov 29, 2017, 11:41 AM
6 points · 20 comments · 1 min read · LW link

The Right to be Wrong
sarahconstantin · Nov 28, 2017, 11:43 PM
48 points · 9 comments · 6 min read · LW link

Any Good Criticism of Karl Popper’s Epistemology?
Elliot_Temple · Nov 28, 2017, 10:31 PM
−8 points · 33 comments · 3 min read · LW link

Free Speech as Legal Right vs. Ethical Value
ozymandias · Nov 28, 2017, 4:49 PM
14 points · 8 comments · 2 min read · LW link

Stable agent, subagent-unstable
Stuart_Armstrong · Nov 28, 2017, 4:05 PM
0 points · 0 comments · 2 min read · LW link

Stable agent, subagent-unstable
Stuart_Armstrong · Nov 28, 2017, 4:04 PM
2 points · 0 comments · 2 min read · LW link

Reward learning summary
Stuart_Armstrong · Nov 28, 2017, 3:55 PM
0 points · 1 comment · 1 min read · LW link

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
avturchin · Nov 28, 2017, 3:39 PM
0 points · 0 comments · 1 min read · LW link
(docs.google.com)

Big Advance in Infinite Ethics
bwest · Nov 28, 2017, 3:10 PM
32 points · 13 comments · 5 min read · LW link

USA v Progressive 1979 excerpt
RyanCarey · Nov 27, 2017, 5:32 PM
22 points · 2 comments · 2 min read · LW link

You Have the Right to Think
Zvi · Nov 27, 2017, 2:10 AM
17 points · 2 comments · 3 min read · LW link
(thezvi.wordpress.com)

Security Mindset and the Logistic Success Curve
Eliezer Yudkowsky · Nov 26, 2017, 3:58 PM
106 points · 49 comments · 20 min read · LW link

An Intuitive Explanation of Inferential Distance
RichardJActon · Nov 26, 2017, 2:13 PM
14 points · 6 comments · 3 min read · LW link

Changing habits for open threads
Hazard · Nov 26, 2017, 12:54 PM
3 points · 4 comments · 2 min read · LW link

Letter from Utopia: Talking to Nick Bostrom
morganism · Nov 25, 2017, 10:19 PM
2 points · 2 comments · 1 min read · LW link
(blog.lareviewofbooks.org)

Security Mindset and Ordinary Paranoia
Eliezer Yudkowsky · Nov 25, 2017, 5:53 PM
132 points · 25 comments · 29 min read · LW link

The Darwin Results
Zvi · Nov 25, 2017, 1:30 PM
53 points · 10 comments · 5 min read · LW link
(thezvi.wordpress.com)

Some mind experiments
RST · Nov 25, 2017, 12:46 PM
5 points · 2 comments · 2 min read · LW link

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”
turchin · Nov 25, 2017, 11:44 AM
0 points · 19 comments · 1 min read · LW link
(www.lesserwrong.com)

Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]
avturchin · Nov 25, 2017, 11:28 AM
3 points · 22 comments · 67 min read · LW link

Communities you might join thread
whpearson · Nov 25, 2017, 9:07 AM
6 points · 13 comments · 1 min read · LW link

Unjustified ideas comment thread
MrRobot · Nov 24, 2017, 8:15 PM
8 points · 24 comments · 1 min read · LW link

Timeless Modesty?
abramdemski · Nov 24, 2017, 11:12 AM
17 points · 2 comments · 3 min read · LW link

Gears Level & Policy Level
abramdemski · Nov 24, 2017, 7:17 AM
63 points · 8 comments · 7 min read · LW link

List of civilisational inadequacy
ChristianKl · Nov 23, 2017, 1:56 PM
36 points · 52 comments · 1 min read · LW link

Open Letter to MIRI + Tons of Interesting Discussion
curi · Nov 22, 2017, 9:16 PM
−12 points · 162 comments · 1 min read · LW link
(curi.us)

Open thread, November 21 - November 28, 2017
ChristianKl · Nov 22, 2017, 7:32 PM
3 points · 0 comments · 1 min read · LW link

Fire drill proposal
MrRobot · Nov 22, 2017, 7:07 PM
−12 points · 7 comments · 1 min read · LW link

A Day in Utopia
ozymandias · Nov 22, 2017, 4:57 PM
26 points · 10 comments · 5 min read · LW link

Civility Is Never Neutral
ozymandias · Nov 22, 2017, 4:54 PM
57 points · 15 comments · 4 min read · LW link

Next narrow-AI challenge proposal
MrRobot · Nov 22, 2017, 11:32 AM
−7 points · 4 comments · 1 min read · LW link

An Educational Curriculum
DragonGod · Nov 22, 2017, 10:11 AM
2 points · 6 comments · 3 min read · LW link

Catastrophe Mitigation Using DRL
Vanessa Kosoy · Nov 22, 2017, 5:54 AM
7 points · 3 comments · 15 min read · LW link

For fantasy fans
MrRobot · Nov 22, 2017, 4:27 AM
−11 points · 0 comments · 1 min read · LW link

Tags or Sub-Groups
Chris_Leong · Nov 21, 2017, 11:28 PM
6 points · 5 comments · 2 min read · LW link

Hero Licensing
Eliezer Yudkowsky · Nov 21, 2017, 9:13 PM
241 points · 83 comments · 52 min read · LW link