Philadelphia SSC Meetup

Majuscule · 1 Feb 2019 23:51 UTC
1 point
0 comments · 1 min read · LW link

STRUCTURE: Reality and rational best practice

Hazard · 1 Feb 2019 23:51 UTC
5 points
2 comments · 1 min read · LW link

An Attempt To Explain No-Self In Simple Terms

Justin Vriend · 1 Feb 2019 23:50 UTC
1 point
0 comments · 3 min read · LW link

STRUCTURE: How the Social Affects your rationality

Hazard · 1 Feb 2019 23:35 UTC
0 points
0 comments · 1 min read · LW link

STRUCTURE: A Crash Course in Your Brain

Hazard · 1 Feb 2019 23:17 UTC
6 points
4 comments · 1 min read · LW link

February Nashville SSC Meetup

Dude McDude · 1 Feb 2019 22:36 UTC
1 point
0 comments · 1 min read · LW link

[Question] What kind of information would serve as the best evidence for resolving the debate of whether a centrist or leftist Democratic nominee is likelier to take the White House in 2020?

Evan_Gaensbauer · 1 Feb 2019 18:40 UTC
10 points
10 comments · 3 min read · LW link

Urgent & important: How (not) to do your to-do list

bfinn · 1 Feb 2019 17:44 UTC
50 points
20 comments · 13 min read · LW link

Who wants to be a Millionaire?

Bucky · 1 Feb 2019 14:02 UTC
29 points
1 comment · 11 min read · LW link

What is Wrong?

Inyuki · 1 Feb 2019 12:02 UTC
1 point
2 comments · 2 min read · LW link

Drexler on AI Risk

PeterMcCluskey · 1 Feb 2019 5:11 UTC
35 points
10 comments · 9 min read · LW link
(www.bayesianinvestor.com)

Boundaries—A map and territory experiment. [post-rationality]

Elo · 1 Feb 2019 2:08 UTC
−18 points
14 comments · 2 min read · LW link

[Question] Why is this utilitarian calculus wrong? Or is it?

EconomicModel · 31 Jan 2019 23:57 UTC
15 points
21 comments · 1 min read · LW link

Small hope for less bias and more practability

ArthurLidia · 31 Jan 2019 22:09 UTC
0 points
0 comments · 1 min read · LW link

Reliability amplification

paulfchristiano · 31 Jan 2019 21:12 UTC
24 points
3 comments · 7 min read · LW link

Cambridge (UK) SSC meetup

thisheavenlyconjugation · 31 Jan 2019 11:45 UTC
1 point
0 comments · 1 min read · LW link

The role of epistemic vs. aleatory uncertainty in quantifying AI-Xrisk

David Scott Krueger (formerly: capybaralet) · 31 Jan 2019 6:13 UTC
15 points
6 comments · 2 min read · LW link

[Question] Applied Rationality podcast—feedback?

Bae's Theorem · 31 Jan 2019 1:46 UTC
11 points
12 comments · 1 min read · LW link

Wireheading is in the eye of the beholder

Stuart_Armstrong · 30 Jan 2019 18:23 UTC
26 points
10 comments · 1 min read · LW link

Masculine Virtues

Jacob Falkovich · 30 Jan 2019 16:03 UTC
52 points
32 comments · 13 min read · LW link

Deconfusing Logical Counterfactuals

Chris_Leong · 30 Jan 2019 15:13 UTC
27 points
16 comments · 11 min read · LW link

Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)

Zvi · 30 Jan 2019 1:10 UTC
48 points
15 comments · 40 min read · LW link
(thezvi.wordpress.com)

Alignment Newsletter #43

Rohin Shah · 29 Jan 2019 21:10 UTC
14 points
2 comments · 13 min read · LW link
(mailchi.mp)

The Question Of Perception

The Arkon · 29 Jan 2019 20:59 UTC
0 points
18 comments · 5 min read · LW link

[Question] Which textbook would you recommend to learn decision theory?

supermartingale · 29 Jan 2019 20:48 UTC
27 points
6 comments · 1 min read · LW link

Towards equilibria-breaking methods

ryan_b · 29 Jan 2019 16:19 UTC
22 points
3 comments · 2 min read · LW link

Can there be an indescribable hellworld?

Stuart_Armstrong · 29 Jan 2019 15:00 UTC
39 points
19 comments · 2 min read · LW link

How much can value learning be disentangled?

Stuart_Armstrong · 29 Jan 2019 14:17 UTC
22 points
30 comments · 2 min read · LW link

Techniques for optimizing worst-case performance

paulfchristiano · 28 Jan 2019 21:29 UTC
23 points
12 comments · 8 min read · LW link

[Link] Did AlphaStar just click faster?

aogara · 28 Jan 2019 20:23 UTC
4 points
14 comments · 1 min read · LW link

“Giftedness” and Genius, Crucial Differences

ArthurLidia · 28 Jan 2019 20:22 UTC
6 points
0 comments · 29 min read · LW link

Quantum Neural Net and You

ScrubbyBubbles · 28 Jan 2019 18:42 UTC
1 point
4 comments · 1 min read · LW link

A small example of one-step hypotheticals

Stuart_Armstrong · 28 Jan 2019 16:12 UTC
14 points
0 comments · 2 min read · LW link

[Question] How would one go about defining the ideal personality compatibility test?

digitalcaffeine · 28 Jan 2019 3:02 UTC
1 point
4 comments · 1 min read · LW link

Solomonoff induction and belief in God

Berkeley Beetle · 28 Jan 2019 3:01 UTC
0 points
3 comments · 1 min read · LW link
(randalrauser.com)

Practical Considerations Regarding Political Polarization

joshuabecker · 27 Jan 2019 22:26 UTC
2 points
0 comments · 1 min read · LW link

“The Unbiased Map”

JohnBuridan · 27 Jan 2019 19:08 UTC
14 points
1 comment · 1 min read · LW link

Prediction Contest 2018: Scores and Retrospective

jbeshir · 27 Jan 2019 17:20 UTC
28 points
5 comments · 1 min read · LW link

Freely Complying With the Ideal: A Theory of Happiness

Solnassant · 27 Jan 2019 12:28 UTC
20 points
2 comments · 5 min read · LW link

Confessions of an Abstraction Hater

Martin Sustrik · 27 Jan 2019 5:50 UTC
12 points
4 comments · 2 min read · LW link
(250bpm.com)

Río Grande: judgment calls

KatjaGrace · 27 Jan 2019 3:50 UTC
25 points
5 comments · 2 min read · LW link
(worldlypositions.tumblr.com)

“Forecasting Transformative AI: An Expert Survey”, Gruetzemacher et al 2019

gwern · 27 Jan 2019 2:34 UTC
16 points
0 comments · 1 min read · LW link
(arxiv.org)

Building up to an Internal Family Systems model

Kaj_Sotala · 26 Jan 2019 12:25 UTC
264 points
86 comments · 28 min read · LW link · 2 reviews

Future directions for narrow value learning

Rohin Shah · 26 Jan 2019 2:36 UTC
12 points
4 comments · 4 min read · LW link

[Question] For what do we need Superintelligent AI?

avturchin · 25 Jan 2019 15:01 UTC
14 points
18 comments · 1 min read · LW link

“AlphaStar: Mastering the Real-Time Strategy Game StarCraft II”, DeepMind [won 10 of 11 games against human pros]

gwern · 24 Jan 2019 20:49 UTC
62 points
52 comments · 1 min read · LW link
(deepmind.com)

Thoughts on reward engineering

paulfchristiano · 24 Jan 2019 20:15 UTC
30 points
30 comments · 11 min read · LW link

From Personal to Prison Gangs: Enforcing Prosocial Behavior

johnswentworth · 24 Jan 2019 18:07 UTC
146 points
26 comments · 5 min read · LW link · 2 reviews

[Question] Is Agent Simulates Predictor a “fair” problem?

Chris_Leong · 24 Jan 2019 13:18 UTC
22 points
19 comments · 1 min read · LW link

[Question] In what way has the generation after us “gone too far”?

Elo · 24 Jan 2019 10:22 UTC
8 points
3 comments · 1 min read · LW link