Combat vs Nurture & Meta-Contrarianism

abramdemski, Jan 10, 2019, 11:17 PM
66 points
12 comments, 4 min read, LW link

Book Recommendations: An Everyone Culture and Moral Mazes

sarahconstantin, Jan 10, 2019, 9:40 PM
45 points
13 comments, 3 min read, LW link
(srconstantin.wordpress.com)

LW/SSC Mixer

Bae's Theorem, Jan 10, 2019, 5:28 PM
4 points
2 comments, 1 min read, LW link

What is narrow value learning?

Rohin Shah, Jan 10, 2019, 7:05 AM
23 points
3 comments, 2 min read, LW link

LW Update 2019-1-09 – Question Updates, UserProfile Sorting

Raemon, Jan 9, 2019, 10:34 PM
29 points
2 comments, 1 min read, LW link

[Question] Latex rendering

Stuart_Armstrong, Jan 9, 2019, 10:32 PM
10 points
6 comments, 1 min read, LW link

Open Thread January 2019

Raemon, Jan 9, 2019, 8:25 PM
23 points
54 comments, 1 min read, LW link

No surjection onto function space for manifold X

Stuart_Armstrong, Jan 9, 2019, 6:07 PM
21 points
0 comments, 6 min read, LW link

The 3 Books Technique for Learning a New Skill

Matt Goldenberg, Jan 9, 2019, 12:45 PM
211 points
48 comments, 2 min read, LW link

[Question] What are questions?

Pee Doom, Jan 9, 2019, 7:37 AM
35 points
17 comments, 2 min read, LW link

Book Review: The Structure Of Scientific Revolutions

Scott Alexander, Jan 9, 2019, 7:10 AM
104 points
30 comments, 19 min read, LW link, 1 review
(slatestarcodex.com)

mindlevelup 3 Year Review

lifelonglearner, Jan 9, 2019, 6:36 AM
18 points
0 comments, 10 min read, LW link

AlphaGo Zero and capability amplification

paulfchristiano, Jan 9, 2019, 12:40 AM
33 points
23 comments, 2 min read, LW link

Predictors as Agents

interstice, Jan 8, 2019, 8:50 PM
10 points
6 comments, 3 min read, LW link

Alignment Newsletter #40

Rohin Shah, Jan 8, 2019, 8:10 PM
21 points
2 comments, 5 min read, LW link
(mailchi.mp)

LessWrong Israel, Jan. 15

JoshuaFox, Jan 8, 2019, 7:47 PM
7 points
1 comment, LW link

What emotions would AIs need to feel?

Stuart_Armstrong, Jan 8, 2019, 3:09 PM
15 points
6 comments, 2 min read, LW link

EA Bristol June 2019 Social

thegreatnick, Jan 8, 2019, 12:11 PM
1 point
0 comments, 1 min read, LW link

Reframing Superintelligence: Comprehensive AI Services as General Intelligence

Rohin Shah, Jan 8, 2019, 7:12 AM
122 points
77 comments, 5 min read, LW link, 2 reviews
(www.fhi.ox.ac.uk)

[Question] Which approach is most promising for aligned AGI?

Chris_Leong, Jan 8, 2019, 2:19 AM
5 points
4 comments, 1 min read, LW link

Sequence introduction: non-agent and multiagent models of mind

Kaj_Sotala, Jan 7, 2019, 2:12 PM
125 points
16 comments, 7 min read, LW link, 1 review

Optimizing for Stories (vs Optimizing Reality)

Ruby, Jan 7, 2019, 8:03 AM
43 points
11 comments, 7 min read, LW link

AI safety without goal-directed behavior

Rohin Shah, Jan 7, 2019, 7:48 AM
68 points
15 comments, 4 min read, LW link

Zero

el, Jan 7, 2019, 3:28 AM
−13 points
0 comments, 1 min read, LW link

On Abstract Systems

Chris_Leong, Jan 6, 2019, 11:41 PM
14 points
1 comment, 1 min read, LW link

Disadvantages of Card Rebalancing

Zvi, Jan 6, 2019, 11:30 PM
32 points
5 comments, 18 min read, LW link
(thezvi.wordpress.com)

EA Bristol Apr 2019 Social

thegreatnick, Jan 6, 2019, 4:24 PM
1 point
0 comments, 1 min read, LW link

EA Bristol Mar 2019 Social

thegreatnick, Jan 6, 2019, 4:19 PM
1 point
0 comments, 1 min read, LW link

EA Bristol Feb 2019

thegreatnick, Jan 6, 2019, 4:13 PM
1 point
0 comments, 1 min read, LW link

EA Bristol Jan 2019 Social

thegreatnick, Jan 6, 2019, 4:10 PM
1 point
0 comments, 1 min read, LW link

Imitation learning considered unsafe?

David Scott Krueger (formerly: capybaralet), Jan 6, 2019, 3:48 PM
20 points
11 comments, 1 min read, LW link

Supervising strong learners by amplifying weak experts

paulfchristiano, Jan 6, 2019, 7:00 AM
29 points
1 comment, 1 min read, LW link
(arxiv.org)

Cambridge SlateStarCodex Meetup

NoSignalNoNoise, Jan 6, 2019, 5:11 AM
6 points
1 comment, 1 min read, LW link

Failures of UDT-AIXI, Part 1: Improper Randomizing

Diffractor, Jan 6, 2019, 3:53 AM
14 points
3 comments, 4 min read, LW link

[Question] Does anti-malaria charity destroy the local anti-malaria industry?

Viliam, Jan 5, 2019, 7:04 PM
61 points
16 comments, 1 min read, LW link

Will humans build goal-directed agents?

Rohin Shah, Jan 5, 2019, 1:33 AM
61 points
43 comments, 5 min read, LW link

I want it my way!

nickhayes, Jan 4, 2019, 6:08 PM
39 points
2 comments, 9 min read, LW link

Towards no-math, graphical instructions for prediction markets

ryan_b, Jan 4, 2019, 4:39 PM
30 points
14 comments, 2 min read, LW link

Two More Decision Theory Problems for Humans

Wei Dai, Jan 4, 2019, 9:00 AM
56 points
14 comments, 2 min read, LW link

[Question] What are good ML/AI related prediction / calibration questions for 2019?

james_t, Jan 4, 2019, 2:40 AM
19 points
4 comments, 2 min read, LW link

[Question] What is a reasonable outside view for the fate of social movements?

Bird Concept, Jan 4, 2019, 12:21 AM
33 points
27 comments, 1 min read, LW link

[Question] Logical inductors in multistable situations.

Donald Hobson, Jan 3, 2019, 11:56 PM
8 points
4 comments, 1 min read, LW link

January Atlanta SSC Meetup

Steve French, Jan 3, 2019, 2:02 PM
1 point
0 comments, LW link

“Traveling Salesman”

telms, Jan 3, 2019, 9:51 AM
5 points
2 comments, 1 min read, LW link

Applied Rationality Workshop Cologne, Germany

mschons, Jan 3, 2019, 9:16 AM
18 points
0 comments, 2 min read, LW link

Bay Area SSC Meetup (special guest Steve Hsu)

Scott Alexander, Jan 3, 2019, 3:02 AM
27 points
0 comments, 1 min read, LW link

January Triangle SSC Meetup

willbobaggins, Jan 3, 2019, 1:31 AM
1 point
0 comments, 1 min read, LW link

[Question] What’s the best way for me to improve my English pronunciation?

ChristianKl, Jan 2, 2019, 11:49 PM
14 points
14 comments, 1 min read, LW link

Sydney Rationality Dojo—December

Nicky Fey, Jan 2, 2019, 11:15 PM
3 points
0 comments, 1 min read, LW link

Sydney Rationality Dojo—November

Cpt. Bl, Jan 2, 2019, 11:03 PM
1 point
0 comments, 1 min read, LW link