Automated Nomic Game 2

jefftk · Feb 5, 2019, 10:11 PM
19 points
2 comments · 2 min read · LW link

Should we bait criminals using clones?

Aël Chappuit · Feb 5, 2019, 9:13 PM
−23 points
3 comments · 1 min read · LW link

Describing things: parsimony, fruitfulness, and adaptability

Mary Chernyshenko · Feb 5, 2019, 8:59 PM
1 point
0 comments · 1 min read · LW link

Philosophy as low-energy approximation

Charlie Steiner · Feb 5, 2019, 7:34 PM
41 points
20 comments · 3 min read · LW link

When to use quantilization

RyanCarey · Feb 5, 2019, 5:17 PM
65 points
5 comments · 4 min read · LW link

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

Ben Pace · Feb 4, 2019, 10:08 PM
43 points
5 comments · 7 min read · LW link

SSC Paris Meetup, 09/02/18

fbreton · Feb 4, 2019, 7:54 PM
1 point
0 comments · 1 min read · LW link

January 2019 gwern.net newsletter

gwern · Feb 4, 2019, 3:53 PM
15 points
0 comments · 1 min read · LW link
(www.gwern.net)

(Why) Does the Basilisk Argument fail?

Lookingforyourlogic · Feb 3, 2019, 11:50 PM
0 points
11 comments · 2 min read · LW link

Constructing Goodhart

johnswentworth · Feb 3, 2019, 9:59 PM
29 points
10 comments · 3 min read · LW link

Conclusion to the sequence on value learning

Rohin Shah · Feb 3, 2019, 9:05 PM
51 points
20 comments · 5 min read · LW link

AI Safety Prerequisites Course: Revamp and New Lessons

philip_b · Feb 3, 2019, 9:04 PM
24 points
5 comments · 1 min read · LW link

[Question] What are some bizarre theories based on anthropic reasoning?

Dr. Jamchie · Feb 3, 2019, 6:48 PM
21 points
13 comments · 1 min read · LW link

Rationality: What’s the point?

Hazard · Feb 3, 2019, 4:34 PM
12 points
11 comments · 1 min read · LW link

Quantifying Human Suffering and “Everyday Suffering”

willfranks · Feb 3, 2019, 1:07 PM
7 points
3 comments · 1 min read · LW link

[Question] How to stay concentrated for a long period of time?

infinickel · Feb 3, 2019, 5:24 AM
6 points
15 comments · 1 min read · LW link

How to notice being mind-hacked

Shmi · Feb 2, 2019, 11:13 PM
18 points
22 comments · 2 min read · LW link

Depression philosophizing

aaq · Feb 2, 2019, 10:54 PM
6 points
2 comments · 1 min read · LW link

LessWrong DC: Metameetup

rusalkii · Feb 2, 2019, 6:50 PM
1 point
0 comments · 1 min read · LW link

SSC Atlanta Meetup

Steve French · Feb 2, 2019, 3:11 AM
2 points
0 comments · 1 min read · LW link

[Question] How does Gradient Descent Interact with Goodhart?

Scott Garrabrant · Feb 2, 2019, 12:14 AM
68 points
19 comments · 4 min read · LW link

Philadelphia SSC Meetup

Majuscule · Feb 1, 2019, 11:51 PM
1 point
0 comments · 1 min read · LW link

STRUCTURE: Reality and rational best practice

Hazard · Feb 1, 2019, 11:51 PM
5 points
2 comments · 1 min read · LW link

An Attempt To Explain No-Self In Simple Terms

Justin Vriend · Feb 1, 2019, 11:50 PM
1 point
0 comments · 3 min read · LW link

STRUCTURE: How the Social Affects your rationality

Hazard · Feb 1, 2019, 11:35 PM
0 points
0 comments · 1 min read · LW link

STRUCTURE: A Crash Course in Your Brain

Hazard · Feb 1, 2019, 11:17 PM
6 points
4 comments · 1 min read · LW link

February Nashville SSC Meetup

Dude McDude · Feb 1, 2019, 10:36 PM
1 point
0 comments · 1 min read · LW link

[Question] What kind of information would serve as the best evidence for resolving the debate of whether a centrist or leftist Democratic nominee is likelier to take the White House in 2020?

Evan_Gaensbauer · Feb 1, 2019, 6:40 PM
10 points
10 comments · 3 min read · LW link

Urgent & important: How (not) to do your to-do list

bfinn · Feb 1, 2019, 5:44 PM
51 points
20 comments · 13 min read · LW link

Who wants to be a Millionaire?

Bucky · Feb 1, 2019, 2:02 PM
29 points
1 comment · 11 min read · LW link

What is Wrong?

Inyuki · Feb 1, 2019, 12:02 PM
1 point
2 comments · 2 min read · LW link

Drexler on AI Risk

PeterMcCluskey · Feb 1, 2019, 5:11 AM
35 points
10 comments · 9 min read · LW link
(www.bayesianinvestor.com)

Boundaries—A map and territory experiment. [post-rationality]

Elo · Feb 1, 2019, 2:08 AM
−18 points
14 comments · 2 min read · LW link

[Question] Why is this utilitarian calculus wrong? Or is it?

EconomicModel · Jan 31, 2019, 11:57 PM
15 points
21 comments · 1 min read · LW link

Small hope for less bias and more practicability

ArthurLidia · Jan 31, 2019, 10:09 PM
0 points
0 comments · 1 min read · LW link

Reliability amplification

paulfchristiano · Jan 31, 2019, 9:12 PM
24 points
3 comments · 7 min read · LW link

Cambridge (UK) SSC meetup

thisheavenlyconjugation · Jan 31, 2019, 11:45 AM
1 point
0 comments · 1 min read · LW link

The role of epistemic vs. aleatory uncertainty in quantifying AI-Xrisk

David Scott Krueger (formerly: capybaralet) · Jan 31, 2019, 6:13 AM
15 points
6 comments · 2 min read · LW link

[Question] Applied Rationality podcast—feedback?

Bae's Theorem · Jan 31, 2019, 1:46 AM
11 points
12 comments · 1 min read · LW link

Wireheading is in the eye of the beholder

Stuart_Armstrong · Jan 30, 2019, 6:23 PM
26 points
10 comments · 1 min read · LW link

Masculine Virtues

Jacob Falkovich · Jan 30, 2019, 4:03 PM
52 points
32 comments · 13 min read · LW link

Deconfusing Logical Counterfactuals

Chris_Leong · Jan 30, 2019, 3:13 PM
27 points
16 comments · 11 min read · LW link

Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)

Zvi · Jan 30, 2019, 1:10 AM
49 points
15 comments · 40 min read · LW link
(thezvi.wordpress.com)

Alignment Newsletter #43

Rohin Shah · Jan 29, 2019, 9:10 PM
14 points
2 comments · 13 min read · LW link
(mailchi.mp)

The Question Of Perception

The Arkon · Jan 29, 2019, 8:59 PM
0 points
18 comments · 5 min read · LW link

[Question] Which textbook would you recommend to learn decision theory?

supermartingale · Jan 29, 2019, 8:48 PM
27 points
6 comments · 1 min read · LW link

Towards equilibria-breaking methods

ryan_b · Jan 29, 2019, 4:19 PM
22 points
3 comments · 2 min read · LW link

Can there be an indescribable hellworld?

Stuart_Armstrong · Jan 29, 2019, 3:00 PM
39 points
19 comments · 2 min read · LW link

How much can value learning be disentangled?

Stuart_Armstrong · Jan 29, 2019, 2:17 PM
22 points
30 comments · 2 min read · LW link

Techniques for optimizing worst-case performance

paulfchristiano · Jan 28, 2019, 9:29 PM
23 points
12 comments · 8 min read · LW link