Let Your Mind Be Not Fixed

Gordon Seidoh Worley · 31 Jul 2020 17:54 UTC
46 points
2 comments · 3 min read · LW link

Enforcing Type Distinction

lionhearted (Sebastian Marshall) · 31 Jul 2020 11:39 UTC
25 points
1 comment · 2 min read · LW link

“Go west, young man!”—Preferences in (imperfect) maps

Stuart_Armstrong · 31 Jul 2020 7:50 UTC
25 points
10 comments · 3 min read · LW link

[Question] Can you gain weirdness points?

NicholasKross · 31 Jul 2020 3:41 UTC
20 points
19 comments · 1 min read · LW link

Free Educational and Research Resources

Rachel Shu · 31 Jul 2020 0:24 UTC
54 points
25 comments · 6 min read · LW link

Sunday August 2, 12pm (PDT) — talks by jimrandomh, johnswentworth, Daniel Filan, Jacobian

30 Jul 2020 23:55 UTC
15 points
2 comments · 1 min read · LW link

Dealing with Curiosity-Stoppers

adamShimi · 30 Jul 2020 22:05 UTC
50 points
6 comments · 10 min read · LW link

[Question] What if memes are common in highly capable minds?

Daniel Kokotajlo · 30 Jul 2020 20:45 UTC
37 points
13 comments · 2 min read · LW link

PSA: Tagging is Awesome

abramdemski · 30 Jul 2020 17:52 UTC
76 points
19 comments · 1 min read · LW link

Covid 7/30: Whack a Mole

Zvi · 30 Jul 2020 15:50 UTC
39 points
4 comments · 10 min read · LW link
(thezvi.wordpress.com)

[Question] Is the work on AI alignment relevant to GPT?

Richard_Kennaway · 30 Jul 2020 12:23 UTC
22 points
5 comments · 1 min read · LW link

[Question] Would a halfway copied brain emulation be at risk of having different values/identity?

Ghatanathoah · 30 Jul 2020 5:43 UTC
9 points
8 comments · 2 min read · LW link

Rationalist Reading Group (Online)

NoSignalNoNoise · 30 Jul 2020 2:14 UTC
7 points
1 comment · 1 min read · LW link

Attention is your scarcest resource

benkuhn · 30 Jul 2020 1:00 UTC
96 points
2 comments · 4 min read · LW link
(www.benkuhn.net)

Learning the prior and generalization

evhub · 29 Jul 2020 22:49 UTC
34 points
16 comments · 4 min read · LW link

What Failure Looks Like: Distilling the Discussion

Ben Pace · 29 Jul 2020 21:49 UTC
81 points
14 comments · 7 min read · LW link

New Paper on Herd Immunity Thresholds

Zvi · 29 Jul 2020 20:50 UTC
41 points
16 comments · 9 min read · LW link
(thezvi.wordpress.com)

What a 20-year lead in military tech might look like

Daniel Kokotajlo · 29 Jul 2020 20:10 UTC
75 points
47 comments · 16 min read · LW link

Engaging Seriously with Short Timelines

sapphire · 29 Jul 2020 19:21 UTC
43 points
21 comments · 3 min read · LW link

[AN #110]: Learning features from human feedback to enable reward learning

Rohin Shah · 29 Jul 2020 17:20 UTC
13 points
2 comments · 10 min read · LW link
(mailchi.mp)

The “best predictor is malicious optimiser” problem

Donald Hobson · 29 Jul 2020 11:49 UTC
14 points
10 comments · 2 min read · LW link

Improving local governance in fragile states—practical lessons from the field

Tim Liptrot · 29 Jul 2020 1:54 UTC
16 points
3 comments · 6 min read · LW link

Parameters of Privacy

Raemon · 29 Jul 2020 1:18 UTC
31 points
2 comments · 6 min read · LW link

Predictions for GPT-N

hippke · 29 Jul 2020 1:16 UTC
36 points
31 comments · 1 min read · LW link

Tagging Open Call / Discussion Thread

Ruby · 28 Jul 2020 21:58 UTC
65 points
118 comments · 4 min read · LW link

Wiki-Tag FAQ

Ruby · 28 Jul 2020 21:57 UTC
39 points
4 comments · 13 min read · LW link

Tags Discussion/Talk Thread

Ruby · 28 Jul 2020 21:57 UTC
30 points
81 comments · 1 min read · LW link

[Question] What happens to variance as neural network training is scaled? What does it imply about “lottery tickets”?

abramdemski · 28 Jul 2020 20:22 UTC
25 points
4 comments · 1 min read · LW link

[Question] How will internet forums like LW be able to defend against GPT-style spam?

ChristianKl · 28 Jul 2020 20:12 UTC
14 points
17 comments · 1 min read · LW link

[Question] To what extent are the scaling properties of Transformer networks exceptional?

abramdemski · 28 Jul 2020 20:06 UTC
30 points
1 comment · 1 min read · LW link

The Conceited Folly of Certainty

Noah Blaff · 28 Jul 2020 19:56 UTC
7 points
3 comments · 20 min read · LW link

The Curse of Cursory Idealism

Noah Blaff · 28 Jul 2020 19:56 UTC
9 points
1 comment · 3 min read · LW link

[Question] Does the lottery ticket hypothesis suggest the scaling hypothesis?

Daniel Kokotajlo · 28 Jul 2020 19:52 UTC
14 points
17 comments · 1 min read · LW link

[Question] Probability that other architectures will scale as well as Transformers?

Daniel Kokotajlo · 28 Jul 2020 19:36 UTC
22 points
4 comments · 1 min read · LW link

[Question] How should I back up and redo, in a publicly-edited article?

Jameson Quinn · 28 Jul 2020 19:07 UTC
7 points
1 comment · 1 min read · LW link

Rereading Atlas Shrugged

Vaniver · 28 Jul 2020 18:54 UTC
160 points
36 comments · 13 min read · LW link · 1 review

FHI Report: How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents

Cullen · 28 Jul 2020 18:34 UTC
2 points
0 comments · 1 min read · LW link
(www.fhi.ox.ac.uk)

Delegate a Forecast

Amandango · 28 Jul 2020 17:43 UTC
44 points
25 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Asset Prices Consistently Violate Efficient Market Hypothesis

Liron · 28 Jul 2020 14:21 UTC
11 points
3 comments · 1 min read · LW link
(lt3000.blogspot.com)

A community-curated repository of interesting GPT-3 stuff

Rudi C · 28 Jul 2020 14:16 UTC
8 points
0 comments · 1 min read · LW link
(github.com)

[Question] Billionaire Economics

Virgil Kurkjian · 28 Jul 2020 3:01 UTC
22 points
31 comments · 1 min read · LW link

[Question] What specific dangers arise when asking GPT-N to write an Alignment Forum post?

Matthew Barnett · 28 Jul 2020 2:56 UTC
44 points
14 comments · 1 min read · LW link

The Future of Science

Richard_Ngo · 28 Jul 2020 2:43 UTC
21 points
2 comments · 7 min read · LW link

Basic Conversational Coordination: Micro-coordination of Intention

Eli Tyre · 27 Jul 2020 22:41 UTC
40 points
7 comments · 2 min read · LW link

[updated] how does gpt2's training corpus capture internet discussion? not well

nostalgebraist · 27 Jul 2020 22:30 UTC
25 points
3 comments · 2 min read · LW link
(nostalgebraist.tumblr.com)

AI and Efficiency

DragonGod · 27 Jul 2020 20:58 UTC
9 points
1 comment · 1 min read · LW link
(openai.com)

The Rise of Commonsense Reasoning

DragonGod · 27 Jul 2020 19:01 UTC
8 points
0 comments · 1 min read · LW link
(www.reddit.com)

Why You Might Want a New Way to Visualize Biases

dtm · 27 Jul 2020 17:30 UTC
18 points
2 comments · 3 min read · LW link

Are we in an AI overhang?

Andy Jones · 27 Jul 2020 12:48 UTC
259 points
106 comments · 4 min read · LW link

Generalizing the Power-Seeking Theorems

TurnTrout · 27 Jul 2020 0:28 UTC
41 points
6 comments · 4 min read · LW link