Sad!

nws · Feb 7, 2019, 7:42 PM
−1 points
6 comments · 1 min read · LW link

Open Thread February 2019

ryan_b · Feb 7, 2019, 6:00 PM
19 points
19 comments · 1 min read · LW link

EA grants available (to individuals)

Jameson Quinn · Feb 7, 2019, 3:17 PM
34 points
8 comments · 3 min read · LW link

X-risks are tragedies of the commons

David Scott Krueger (formerly: capybaralet) · Feb 7, 2019, 2:48 AM
9 points
19 comments · 1 min read · LW link

Do Science and Technology Lead to a Fall in Human Values?

jayshi19 · Feb 7, 2019, 1:53 AM
1 point
1 comment · 1 min read · LW link
(techandhumanity.com)

Test Cases for Impact Regularisation Methods

DanielFilan · Feb 6, 2019, 9:50 PM
72 points
5 comments · 13 min read · LW link
(danielfilan.com)

A tentative solution to a certain mythological beast of a problem

Edward Knox · Feb 6, 2019, 8:42 PM
−11 points
9 comments · 1 min read · LW link

AI Alignment is Alchemy.

Jeevan · Feb 6, 2019, 8:32 PM
−9 points
20 comments · 1 min read · LW link

My use of the phrase “Super-Human Feedback”

David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:11 PM
13 points
0 comments · 1 min read · LW link

Thoughts on Ben Garfinkel’s “How sure are we about this AI stuff?”

David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:09 PM
25 points
17 comments · 1 min read · LW link

Show LW: (video) how to remember everything you learn

ArthurLidia · Feb 6, 2019, 7:02 PM
3 points
0 comments · 1 min read · LW link

Does the EA community do “basic science” grants? How do I get one?

Jameson Quinn · Feb 6, 2019, 6:10 PM
7 points
6 comments · 1 min read · LW link

Is the World Getting Better? A brief summary of recent debate

ErickBall · Feb 6, 2019, 5:38 PM
35 points
8 comments · 2 min read · LW link
(capx.co)

Security amplification

paulfchristiano · Feb 6, 2019, 5:28 PM
21 points
2 comments · 13 min read · LW link

Alignment Newsletter #44

Rohin Shah · Feb 6, 2019, 8:30 AM
18 points
0 comments · 9 min read · LW link
(mailchi.mp)

South Bay Meetup March 2nd

David Friedman · Feb 6, 2019, 6:48 AM
1 point
0 comments · LW link

[Question] If Rationality can be likened to a ‘Martial Art’, what would be the Forms?

Bae's Theorem · Feb 6, 2019, 5:48 AM
21 points
10 comments · 1 min read · LW link

Complexity Penalties in Statistical Learning

michael_h · Feb 6, 2019, 4:13 AM
31 points
3 comments · 6 min read · LW link

Automated Nomic Game 2

jefftk · Feb 5, 2019, 10:11 PM
19 points
2 comments · 2 min read · LW link

Should we bait criminals using clones?

Aël Chappuit · Feb 5, 2019, 9:13 PM
−23 points
3 comments · 1 min read · LW link

Describing things: parsimony, fruitfulness, and adaptability

Mary Chernyshenko · Feb 5, 2019, 8:59 PM
1 point
0 comments · 1 min read · LW link

Philosophy as low-energy approximation

Charlie Steiner · Feb 5, 2019, 7:34 PM
41 points
20 comments · 3 min read · LW link

When to use quantilization

RyanCarey · Feb 5, 2019, 5:17 PM
65 points
5 comments · 4 min read · LW link

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

Ben Pace · Feb 4, 2019, 10:08 PM
43 points
5 comments · 7 min read · LW link

SSC Paris Meetup, 09/02/18

fbreton · Feb 4, 2019, 7:54 PM
1 point
0 comments · 1 min read · LW link

January 2019 gwern.net newsletter

gwern · Feb 4, 2019, 3:53 PM
15 points
0 comments · 1 min read · LW link
(www.gwern.net)

(Why) Does the Basilisk Argument fail?

Lookingforyourlogic · Feb 3, 2019, 11:50 PM
0 points
11 comments · 2 min read · LW link

Constructing Goodhart

johnswentworth · Feb 3, 2019, 9:59 PM
29 points
10 comments · 3 min read · LW link

Conclusion to the sequence on value learning

Rohin Shah · Feb 3, 2019, 9:05 PM
51 points
20 comments · 5 min read · LW link

AI Safety Prerequisites Course: Revamp and New Lessons

philip_b · Feb 3, 2019, 9:04 PM
24 points
5 comments · 1 min read · LW link

[Question] What are some bizarre theories based on anthropic reasoning?

Dr. Jamchie · Feb 3, 2019, 6:48 PM
21 points
13 comments · 1 min read · LW link

Rationality: What’s the point?

Hazard · Feb 3, 2019, 4:34 PM
12 points
11 comments · 1 min read · LW link

Quantifying Human Suffering and “Everyday Suffering”

willfranks · Feb 3, 2019, 1:07 PM
7 points
3 comments · 1 min read · LW link

[Question] How to stay concentrated for a long period of time?

infinickel · Feb 3, 2019, 5:24 AM
6 points
15 comments · 1 min read · LW link

How to notice being mind-hacked

Shmi · Feb 2, 2019, 11:13 PM
18 points
22 comments · 2 min read · LW link

Depression philosophizing

aaq · Feb 2, 2019, 10:54 PM
6 points
2 comments · 1 min read · LW link

LessWrong DC: Metameetup

rusalkii · Feb 2, 2019, 6:50 PM
1 point
0 comments · 1 min read · LW link

SSC Atlanta Meetup

Steve French · Feb 2, 2019, 3:11 AM
2 points
0 comments · 1 min read · LW link

[Question] How does Gradient Descent Interact with Goodhart?

Scott Garrabrant · Feb 2, 2019, 12:14 AM
68 points
19 comments · 4 min read · LW link

Philadelphia SSC Meetup

Majuscule · Feb 1, 2019, 11:51 PM
1 point
0 comments · 1 min read · LW link

STRUCTURE: Reality and rational best practice

Hazard · Feb 1, 2019, 11:51 PM
5 points
2 comments · 1 min read · LW link

An Attempt To Explain No-Self In Simple Terms

Justin Vriend · Feb 1, 2019, 11:50 PM
1 point
0 comments · 3 min read · LW link

STRUCTURE: How the Social Affects your rationality

Hazard · Feb 1, 2019, 11:35 PM
0 points
0 comments · 1 min read · LW link

STRUCTURE: A Crash Course in Your Brain

Hazard · Feb 1, 2019, 11:17 PM
6 points
4 comments · 1 min read · LW link

February Nashville SSC Meetup

Dude McDude · Feb 1, 2019, 10:36 PM
1 point
0 comments · 1 min read · LW link

[Question] What kind of information would serve as the best evidence for resolving the debate of whether a centrist or leftist Democratic nominee is likelier to take the White House in 2020?

Evan_Gaensbauer · Feb 1, 2019, 6:40 PM
10 points
10 comments · 3 min read · LW link

Urgent & important: How (not) to do your to-do list

bfinn · Feb 1, 2019, 5:44 PM
51 points
20 comments · 13 min read · LW link

Who wants to be a Millionaire?

Bucky · Feb 1, 2019, 2:02 PM
29 points
1 comment · 11 min read · LW link

What is Wrong?

Inyuki · Feb 1, 2019, 12:02 PM
1 point
2 comments · 2 min read · LW link

Drexler on AI Risk

PeterMcCluskey · Feb 1, 2019, 5:11 AM
35 points
10 comments · 9 min read · LW link
(www.bayesianinvestor.com)