Minimize Use of Standard Internet Food Delivery
Zvi · Feb 10, 2019, 7:50 PM · −18 points · 28 comments · 2 min read · LW link (thezvi.wordpress.com)

Propositional Logic, Syntactic Implication
Donald Hobson · Feb 10, 2019, 6:12 PM · 5 points · 1 comment · 1 min read · LW link

Fighting the allure of depressive realism
aaq · Feb 10, 2019, 4:46 PM · 19 points · 2 comments · 3 min read · LW link

Structured Concurrency Cross-language Forum
Martin Sustrik · Feb 10, 2019, 9:20 AM · 12 points · 0 comments · 1 min read · LW link (250bpm.com)

Probability space has 2 metrics
Donald Hobson · Feb 10, 2019, 12:28 AM · 89 points · 11 comments · 1 min read · LW link

Some Thoughts on Metaphilosophy
Wei Dai · Feb 10, 2019, 12:28 AM · 82 points · 42 comments · 4 min read · LW link

The Argument from Philosophical Difficulty
Wei Dai · Feb 10, 2019, 12:28 AM · 66 points · 31 comments · 1 min read · LW link

Dojo on stress
Elo · Feb 9, 2019, 10:49 PM · 13 points · 0 comments · 4 min read · LW link

[Question] When should we expect the education bubble to pop? How can we short it?
Bird Concept · Feb 9, 2019, 9:39 PM · 35 points · 12 comments · 1 min read · LW link

The Cake is a Lie, Part 2.
IncomprehensibleMane · Feb 9, 2019, 8:07 PM · −27 points · 7 comments · 9 min read · LW link

The Case for a Bigger Audience
John_Maxwell · Feb 9, 2019, 7:22 AM · 68 points · 58 comments · 2 min read · LW link

[Question] Can someone design this Google Sheets bug list template for me?
Bae's Theorem · Feb 9, 2019, 6:55 AM · 4 points · 4 comments · 1 min read · LW link

Reinforcement Learning in the Iterated Amplification Framework
William_S · Feb 9, 2019, 12:56 AM · 25 points · 12 comments · 4 min read · LW link

HCH is not just Mechanical Turk
William_S · Feb 9, 2019, 12:46 AM · 42 points · 6 comments · 3 min read · LW link

Friendly SSC and LW meetup
Sean Aubin · Feb 9, 2019, 12:20 AM · 1 point · 0 comments · 1 min read · LW link

The Hamming Question
Raemon · Feb 8, 2019, 7:34 PM · 64 points · 38 comments · 3 min read · LW link

Make an appointment with your saner self
MalcolmOcean · Feb 8, 2019, 5:05 AM · 28 points · 0 comments · 4 min read · LW link

[Question] What is learning?
Pee Doom · Feb 8, 2019, 3:18 AM · 11 points · 2 comments · 1 min read · LW link

Is this how I choose to show up?
Elo · Feb 8, 2019, 12:30 AM · 5 points · 3 comments · 5 min read · LW link

Sad!
nws · Feb 7, 2019, 7:42 PM · −1 points · 6 comments · 1 min read · LW link

Open Thread February 2019
ryan_b · Feb 7, 2019, 6:00 PM · 19 points · 19 comments · 1 min read · LW link

EA grants available (to individuals)
Jameson Quinn · Feb 7, 2019, 3:17 PM · 34 points · 8 comments · 3 min read · LW link

X-risks are a tragedies of the commons
David Scott Krueger (formerly: capybaralet) · Feb 7, 2019, 2:48 AM · 9 points · 19 comments · 1 min read · LW link

Do Science and Technology Lead to a Fall in Human Values?
jayshi19 · Feb 7, 2019, 1:53 AM · 1 point · 1 comment · 1 min read · LW link (techandhumanity.com)

Test Cases for Impact Regularisation Methods
DanielFilan · Feb 6, 2019, 9:50 PM · 72 points · 5 comments · 13 min read · LW link (danielfilan.com)

A tentative solution to a certain mythological beast of a problem
Edward Knox · Feb 6, 2019, 8:42 PM · −11 points · 9 comments · 1 min read · LW link

AI Alignment is Alchemy.
Jeevan · Feb 6, 2019, 8:32 PM · −9 points · 20 comments · 1 min read · LW link

My use of the phrase “Super-Human Feedback”
David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:11 PM · 13 points · 0 comments · 1 min read · LW link

Thoughts on Ben Garfinkel’s “How sure are we about this AI stuff?”
David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:09 PM · 25 points · 17 comments · 1 min read · LW link

Show LW: (video) how to remember everything you learn
ArthurLidia · Feb 6, 2019, 7:02 PM · 3 points · 0 comments · 1 min read · LW link

Does the EA community do “basic science” grants? How do I get one?
Jameson Quinn · Feb 6, 2019, 6:10 PM · 7 points · 6 comments · 1 min read · LW link

Is the World Getting Better? A brief summary of recent debate
ErickBall · Feb 6, 2019, 5:38 PM · 35 points · 8 comments · 2 min read · LW link (capx.co)

Security amplification
paulfchristiano · Feb 6, 2019, 5:28 PM · 21 points · 2 comments · 13 min read · LW link

Alignment Newsletter #44
Rohin Shah · Feb 6, 2019, 8:30 AM · 18 points · 0 comments · 9 min read · LW link (mailchi.mp)

South Bay Meetup March 2nd
David Friedman · Feb 6, 2019, 6:48 AM · 1 point · 0 comments · LW link

[Question] If Rationality can be likened to a ‘Martial Art’, what would be the Forms?
Bae's Theorem · Feb 6, 2019, 5:48 AM · 21 points · 10 comments · 1 min read · LW link

Complexity Penalties in Statistical Learning
michael_h · Feb 6, 2019, 4:13 AM · 31 points · 3 comments · 6 min read · LW link

Automated Nomic Game 2
jefftk · Feb 5, 2019, 10:11 PM · 19 points · 2 comments · 2 min read · LW link

Should we bait criminals using clones?
Aël Chappuit · Feb 5, 2019, 9:13 PM · −23 points · 3 comments · 1 min read · LW link

Describing things: parsimony, fruitfulness, and adaptability
Mary Chernyshenko · Feb 5, 2019, 8:59 PM · 1 point · 0 comments · 1 min read · LW link

Philosophy as low-energy approximation
Charlie Steiner · Feb 5, 2019, 7:34 PM · 41 points · 20 comments · 3 min read · LW link

When to use quantilization
RyanCarey · Feb 5, 2019, 5:17 PM · 65 points · 5 comments · 4 min read · LW link

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach
Ben Pace · Feb 4, 2019, 10:08 PM · 43 points · 5 comments · 7 min read · LW link

SSC Paris Meetup, 09/02/18
fbreton · Feb 4, 2019, 7:54 PM · 1 point · 0 comments · 1 min read · LW link

January 2019 gwern.net newsletter
gwern · Feb 4, 2019, 3:53 PM · 15 points · 0 comments · 1 min read · LW link (www.gwern.net)

(Why) Does the Basilisk Argument fail?
Lookingforyourlogic · Feb 3, 2019, 11:50 PM · 0 points · 11 comments · 2 min read · LW link

Constructing Goodhart
johnswentworth · Feb 3, 2019, 9:59 PM · 29 points · 10 comments · 3 min read · LW link

Conclusion to the sequence on value learning
Rohin Shah · Feb 3, 2019, 9:05 PM · 51 points · 20 comments · 5 min read · LW link

AI Safety Prerequisites Course: Revamp and New Lessons
philip_b · Feb 3, 2019, 9:04 PM · 24 points · 5 comments · 1 min read · LW link

[Question] What are some of bizarre theories based on anthropic reasoning?
Dr. Jamchie · Feb 3, 2019, 6:48 PM · 21 points · 13 comments · 1 min read · LW link