Adjectives from the Future: The Dangers of Result-based Descriptions

Pradeep_Kumar · Aug 11, 2019, 7:19 PM
19 points
8 comments · 11 min read · LW link

[Question] Could we solve this email mess if we all moved to paid emails?

Bird Concept · Aug 11, 2019, 4:31 PM
29 points
50 comments · 4 min read · LW link

AI Safety Reading Group

Søren Elverlin · Aug 11, 2019, 9:01 AM
16 points
8 comments · 1 min read · LW link

[Question] Does human choice have to be transitive in order to be rational/consistent?

jmh · Aug 11, 2019, 1:49 AM
9 points
6 comments · 1 min read · LW link

Diana Fleischman and Geoffrey Miller—Audience Q&A

Jacob Falkovich · Aug 10, 2019, 10:37 PM
38 points
6 comments · 9 min read · LW link

Intransitive Preferences You Can’t Pump

zulupineapple · Aug 9, 2019, 11:10 PM
0 points
2 comments · 1 min read · LW link

Categorial preferences and utility functions

DavidHolmes · Aug 9, 2019, 9:36 PM
10 points
6 comments · 5 min read · LW link

[Question] What is the state of the ego depletion field?

Eli Tyre · Aug 9, 2019, 8:30 PM
27 points
10 comments · 1 min read · LW link

Why Gradients Vanish and Explode

Matthew Barnett · Aug 9, 2019, 2:54 AM
25 points
9 comments · 3 min read · LW link

AI Forecasting Dictionary (Forecasting infrastructure, part 1)

Aug 8, 2019, 4:10 PM
50 points
0 comments · 5 min read · LW link

[Question] Why do humans not have built-in neural i/o channels?

Richard_Ngo · Aug 8, 2019, 1:09 PM
25 points
23 comments · 1 min read · LW link

Which of these five AI alignment research projects ideas are no good?

rmoehn · Aug 8, 2019, 7:17 AM
25 points
13 comments · 1 min read · LW link

Calibrating With Cards

lifelonglearner · Aug 8, 2019, 6:44 AM
32 points
3 comments · 3 min read · LW link

[Question] Is there a source/market for LW-related t-shirts?

jooyous · Aug 8, 2019, 4:30 AM
8 points
3 comments · 1 min read · LW link

Verification and Transparency

DanielFilan · Aug 8, 2019, 1:50 AM
35 points
6 comments · 2 min read · LW link
(danielfilan.com)

Toy model piece #2: Combining short and long range partial preferences

Stuart_Armstrong · Aug 8, 2019, 12:11 AM
14 points
0 comments · 4 min read · LW link

Four Ways An Impact Measure Could Help Alignment

Matthew Barnett · Aug 8, 2019, 12:10 AM
21 points
1 comment · 9 min read · LW link

Nashville August SSC Meetup

friedelcraftiness · Aug 7, 2019, 8:11 PM
1 point
0 comments · 1 min read · LW link

In defense of Oracle (“Tool”) AI research

Steven Byrnes · Aug 7, 2019, 7:14 PM
22 points
11 comments · 4 min read · LW link

Help forecast study replication in this social science prediction market

rosiecam · Aug 7, 2019, 6:18 PM
29 points
3 comments · 1 min read · LW link

[Question] Edit Nickname

Luigi Lotti · Aug 7, 2019, 5:42 PM
5 points
1 comment · 1 min read · LW link

Self-Supervised Learning and AGI Safety

Steven Byrnes · Aug 7, 2019, 2:21 PM
30 points
9 comments · 12 min read · LW link

Emotions are not beliefs

Chris_Leong · Aug 7, 2019, 6:27 AM
25 points
2 comments · 2 min read · LW link

Understanding Recent Impact Measures

Matthew Barnett · Aug 7, 2019, 4:57 AM
16 points
6 comments · 7 min read · LW link

[Site Update] Behind the scenes data-layer and caching improvements

habryka · Aug 7, 2019, 12:49 AM
23 points
3 comments · 1 min read · LW link

Project Proposal: Considerations for trading off capabilities and safety impacts of AI research

David Scott Krueger (formerly: capybaralet) · Aug 6, 2019, 10:22 PM
25 points
11 comments · 2 min read · LW link

Subagents, neural Turing machines, thought selection, and blindspots

Kaj_Sotala · Aug 6, 2019, 9:15 PM
87 points
3 comments · 12 min read · LW link

[Question] Percent reduction of gun-related deaths by color of gun.

Gunnar_Zarncke · Aug 6, 2019, 8:28 PM
8 points
11 comments · 1 min read · LW link

New paper: Corrigibility with Utility Preservation

Koen.Holtman · Aug 6, 2019, 7:04 PM
44 points
11 comments · 2 min read · LW link

Weak foundation of determinism analysis

aiiixiii · Aug 6, 2019, 7:03 PM
14 points
54 comments · 3 min read · LW link

Trauma, Meditation, and a Cool Scar

Logan Riggs · Aug 6, 2019, 4:17 PM
102 points
17 comments · 5 min read · LW link · 1 review

[Question] Why is the nitrogen cycle so under-emphasized compared to climate change

ChristianKl · Aug 6, 2019, 9:25 AM
15 points
4 comments · 1 min read · LW link

[Question] How would a person go about starting a geoengineering startup?

Pee Doom · Aug 6, 2019, 7:34 AM
11 points
5 comments · LW link

Status 451 on Diagnosis: Russell Aphasia

Zack_M_Davis · Aug 6, 2019, 4:43 AM
48 points
1 comment · 1 min read · LW link
(status451.com)

Searle’s Chinese Room and the Meaning of Meaning

Jimdrix_Hendri · Aug 6, 2019, 4:09 AM
0 points
4 comments · 2 min read · LW link

[Question] What are the best resources for examining the evidence for anthropogenic climate change?

Matthew Barnett · Aug 6, 2019, 2:53 AM
10 points
8 comments · 1 min read · LW link

A Survey of Early Impact Measures

Matthew Barnett · Aug 6, 2019, 1:22 AM
29 points
0 comments · 8 min read · LW link

Preferences as an (instinctive) stance

Stuart_Armstrong · Aug 6, 2019, 12:43 AM
18 points
4 comments · 4 min read · LW link

[Question] How to navigate through contradictory (health/fitness) advice?

Sherrinford · Aug 5, 2019, 8:58 PM
14 points
7 comments · 1 min read · LW link

My recommendations for gratitude exercises

MaxCarpendale · Aug 5, 2019, 7:04 PM
40 points
3 comments · 5 min read · LW link

[AN #61] AI policy and governance, from two people in the field

Rohin Shah · Aug 5, 2019, 5:00 PM
12 points
2 comments · 9 min read · LW link
(mailchi.mp)

DC SSC Meetup

Robi Rahman · Aug 5, 2019, 4:19 PM
2 points
0 comments · 1 min read · LW link

DC SSC Meetup

Robi Rahman · Aug 5, 2019, 4:16 PM
2 points
0 comments · 1 min read · LW link

[Question] Do you do weekly or daily reviews? What are they like?

benwr · Aug 5, 2019, 1:23 AM
23 points
8 comments · 1 min read · LW link

[Question] Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening?

mako yass · Aug 5, 2019, 12:12 AM
94 points
55 comments · 2 min read · LW link

[Question] [Resource Request] What’s the sequence post which explains you should continue to believe things about a particle that’s moving beyond your ability to observe it?

Ruby · Aug 4, 2019, 10:31 PM
6 points
4 comments · 1 min read · LW link

AI Alignment Open Thread August 2019

habryka · Aug 4, 2019, 10:09 PM
35 points
96 comments · 1 min read · LW link

Where do analogies break down?

neilkakkar · Aug 4, 2019, 9:23 PM
2 points
0 comments · 5 min read · LW link
(neilkakkar.com)

Inversion of theorems into definitions when generalizing

riceissa · Aug 4, 2019, 5:44 PM
25 points
3 comments · 5 min read · LW link

Cephaloponderings

Jacob Falkovich · Aug 4, 2019, 4:45 PM
39 points
4 comments · 7 min read · LW link