Categorizing Love: How having more words for love might make it less scary

squidious · 5 Apr 2018 23:10 UTC
15 points
2 comments · 3 min read · LW link
(opalsandbonobos.blogspot.com)

Review of CZEA “Intense EA Weekend” retreat

Jan_Kulveit · 5 Apr 2018 23:04 UTC
26 points
1 comment · 8 min read · LW link
(effective-altruism.com)

Bounds of Attention

squidious · 5 Apr 2018 22:53 UTC
26 points
1 comment · 2 min read · LW link

Moral Uncertainty

Mati_Roy · 5 Apr 2018 18:59 UTC
2 points
0 comments · 1 min read · LW link

Schelling Day

Mati_Roy · 5 Apr 2018 18:56 UTC
2 points
0 comments · 1 min read · LW link

Fun Theory—Group Discussion

Mati_Roy · 5 Apr 2018 18:54 UTC
2 points
0 comments · 1 min read · LW link

Against Occam’s Razor

zulupineapple · 5 Apr 2018 17:59 UTC
4 points
20 comments · 1 min read · LW link

Adult Neurogenesis – A Pointed Review

Scott Alexander · 5 Apr 2018 4:50 UTC
55 points
17 comments · 11 min read · LW link
(slatestarcodex.com)

Washington, D.C.: Less Wrong

RobinZ · 5 Apr 2018 3:05 UTC
2 points
0 comments · 1 min read · LW link

Death in Groups

ryan_b · 5 Apr 2018 0:45 UTC
31 points
20 comments · 2 min read · LW link

Competition for Power

Samo Burja · 4 Apr 2018 17:10 UTC
17 points
8 comments · 9 min read · LW link
(medium.com)

Realistic thought experiments

KatjaGrace · 4 Apr 2018 1:50 UTC
26 points
8 comments · 1 min read · LW link
(meteuphoric.wordpress.com)

HPMoE 3

alkjash · 4 Apr 2018 1:00 UTC
4 points
0 comments · 1 min read · LW link
(radimentary.wordpress.com)

An Argument For Prioritizing: “Positively Shaping the Development of Crypto-assets”

rhys_lindmark · 3 Apr 2018 22:12 UTC
3 points
1 comment · 1 min read · LW link
(effective-altruism.com)

Specification gaming examples in AI

Vika · 3 Apr 2018 12:30 UTC
45 points
9 comments · 1 min read · LW link · 2 reviews

[Draft for commenting] Near-Term AI risks predictions

avturchin · 3 Apr 2018 10:29 UTC
6 points
6 comments · 1 min read · LW link

Unyielding Yoda Timers: Taking the Hammertime Final Exam

TurnTrout · 3 Apr 2018 2:38 UTC
16 points
3 comments · 1 min read · LW link

Musings on Exploration

Diffractor · 3 Apr 2018 2:15 UTC
1 point
4 comments · 6 min read · LW link

Suffering and Intractable Pain

Gordon Seidoh Worley · 3 Apr 2018 1:05 UTC
11 points
4 comments · 7 min read · LW link
(mapandterritory.org)

Why Karma 2.0? (A Kabbalistic Explanation)

Ben Pace · 2 Apr 2018 20:43 UTC
15 points
1 comment · 1 min read · LW link

Brno: Far future, existential risk and AI safety

Jan_Kulveit · 2 Apr 2018 19:11 UTC
3 points
0 comments · 1 min read · LW link

HPMoE 2

alkjash · 2 Apr 2018 5:30 UTC
6 points
3 comments · 1 min read · LW link
(radimentary.wordpress.com)

Internal Diet Crux

Jacob Falkovich · 2 Apr 2018 5:05 UTC
43 points
8 comments · 6 min read · LW link

New York Rationalist Seder

Jacob Falkovich · 2 Apr 2018 0:24 UTC
3 points
0 comments · 1 min read · LW link

Can corrigibility be learned safely?

Wei Dai · 1 Apr 2018 23:07 UTC
35 points
115 comments · 4 min read · LW link

Global insect declines: Why aren’t we all dead yet?

eukaryote · 1 Apr 2018 20:38 UTC
28 points
26 comments · 1 min read · LW link

Announcing Rational Newsletter

Alexey Lapitsky · 1 Apr 2018 14:37 UTC
10 points
8 comments · 1 min read · LW link

April Fools: Announcing: Karma 2.0

habryka · 1 Apr 2018 10:33 UTC
63 points
56 comments · 1 min read · LW link

Life hacks

Jan_Kulveit · 1 Apr 2018 10:29 UTC
4 points
0 comments · 1 min read · LW link

One-Year Anniversary Retrospective—Los Angeles

RobertM · 1 Apr 2018 6:34 UTC
12 points
4 comments · 3 min read · LW link

My take on agent foundations: formalizing metaphilosophical competence

zhukeepa · 1 Apr 2018 6:33 UTC
21 points
6 comments · 1 min read · LW link

Corrigible but misaligned: a superintelligent messiah

zhukeepa · 1 Apr 2018 6:20 UTC
28 points
26 comments · 5 min read · LW link

LW Update 3/31 - Post Highlights and Bug Fixes

Raemon · 1 Apr 2018 4:01 UTC
10 points
2 comments · 1 min read · LW link

Schelling Shifts During AI Self-Modification

MikailKhan · 1 Apr 2018 1:58 UTC
6 points
3 comments · 6 min read · LW link

Reframing misaligned AGI’s: well-intentioned non-neurotypical assistants

zhukeepa · 1 Apr 2018 1:22 UTC
46 points
14 comments · 2 min read · LW link

The Regularizing-Reducing Model

RyenKrusinga · 1 Apr 2018 1:16 UTC
3 points
6 comments · 1 min read · LW link
(drive.google.com)

Metaphilosophical competence can’t be disentangled from alignment

zhukeepa · 1 Apr 2018 0:38 UTC
34 points
39 comments · 3 min read · LW link

Belief alignment

hnowak · 1 Apr 2018 0:13 UTC
1 point
2 comments · 6 min read · LW link

A Sketch of Good Communication

Ben Pace · 31 Mar 2018 22:48 UTC
198 points
35 comments · 3 min read · LW link · 1 review

Harry Potter and the Method of Entropy 1 [LessWrong version]

habryka · 31 Mar 2018 20:38 UTC
6 points
0 comments · 3 min read · LW link

Harry Potter and the Method of Entropy

alkjash · 31 Mar 2018 20:10 UTC
11 points
12 comments · 1 min read · LW link
(radimentary.wordpress.com)

Salience

Tueskes · 31 Mar 2018 19:52 UTC
6 points
1 comment · 4 min read · LW link

Opportunities for individual donors in AI safety

Alex Flint · 31 Mar 2018 18:37 UTC
30 points
3 comments · 11 min read · LW link

Time in Machine Metaethics

Razmęk Massaräinen · 31 Mar 2018 15:02 UTC
2 points
1 comment · 6 min read · LW link

Nice Things

Zvi · 31 Mar 2018 12:30 UTC
14 points
0 comments · 2 min read · LW link
(thezvi.wordpress.com)

Reducing Agents: When abstractions break

Hazard · 31 Mar 2018 0:03 UTC
13 points
10 comments · 8 min read · LW link

Sydney Rationality Dojo—April

luminosity · 30 Mar 2018 14:18 UTC
1 point
0 comments · 1 min read · LW link

The Eternal Grind

Zvi · 30 Mar 2018 11:40 UTC
10 points
1 comment · 17 min read · LW link
(thezvi.wordpress.com)

Reward hacking and Goodhart’s law by evolutionary algorithms

Jan_Kulveit · 30 Mar 2018 7:57 UTC
18 points
5 comments · 1 min read · LW link
(arxiv.org)

Rationalist Lent is over

Qiaochu_Yuan · 30 Mar 2018 5:57 UTC
20 points
16 comments · 1 min read · LW link