Progress links and tweets, 2022-08-31

jasoncrawford · 31 Aug 2022 21:54 UTC
13 points
4 comments · 1 min read · LW link
(rootsofprogress.org)

Enantiodromia

ChristianKl · 31 Aug 2022 21:13 UTC
38 points
7 comments · 3 min read · LW link

[Question] Supposing Europe is headed for a serious energy crisis this winter, what can/should one do as an individual to prepare?

Erich_Grunewald · 31 Aug 2022 19:28 UTC
18 points
13 comments · 1 min read · LW link

New 80,000 Hours problem profile on existential risks from AI

Benjamin Hilton · 31 Aug 2022 17:36 UTC
28 points
6 comments · 7 min read · LW link
(80000hours.org)

Grand Theft Education

Zvi · 31 Aug 2022 11:50 UTC
66 points
18 comments · 20 min read · LW link
(thezvi.wordpress.com)

How much impact can any one man have?

GregorDeVillain · 31 Aug 2022 10:26 UTC
9 points
3 comments · 4 min read · LW link

[Question] How might we make better use of AI capabilities research for alignment purposes?

ghostwheel · 31 Aug 2022 4:19 UTC
11 points
4 comments · 1 min read · LW link

[Question] AI Box Experiment: Are people still interested?

Double · 31 Aug 2022 3:04 UTC
30 points
13 comments · 1 min read · LW link

OC ACX/LW in Newport Beach

Michael Michalchik · 31 Aug 2022 2:56 UTC
1 point
1 comment · 1 min read · LW link

Survey of NLP Researchers: NLP is contributing to AGI progress; major catastrophe plausible

Sam Bowman · 31 Aug 2022 1:39 UTC
92 points
6 comments · 2 min read · LW link

And the word was “God”

pchvykov · 30 Aug 2022 21:13 UTC
−22 points
4 comments · 3 min read · LW link

Worlds Where Iterative Design Fails

johnswentworth · 30 Aug 2022 20:48 UTC
190 points
30 comments · 10 min read · LW link · 1 review

Inner Alignment via Superpowers

30 Aug 2022 20:01 UTC
37 points
13 comments · 4 min read · LW link

ML Model Attribution Challenge [Linkpost]

aogara · 30 Aug 2022 19:34 UTC
11 points
0 comments · 1 min read · LW link
(mlmac.io)

How likely is deceptive alignment?

evhub · 30 Aug 2022 19:34 UTC
103 points
28 comments · 60 min read · LW link

Built-In Bundling For Faster Loading

jefftk · 30 Aug 2022 19:20 UTC
15 points
0 comments · 2 min read · LW link
(www.jefftk.com)

[Question] A bayesian updating on expert opinions

amarai · 30 Aug 2022 11:56 UTC
1 point
1 comment · 1 min read · LW link

Any Utilitarianism Makes Sense As Policy

George3d6 · 30 Aug 2022 9:55 UTC
6 points
6 comments · 7 min read · LW link
(www.epistem.ink)

A gentle primer on caring, including in strange senses, with applications

Kaarel · 30 Aug 2022 8:05 UTC
9 points
4 comments · 18 min read · LW link

Modified Guess Culture

konstell · 30 Aug 2022 2:30 UTC
5 points
5 comments · 1 min read · LW link
(konstell.com)

[Question] What is the best critique of AI existential risk arguments?

joshc · 30 Aug 2022 2:18 UTC
6 points
11 comments · 1 min read · LW link

How to plan for a radically uncertain future?

Kerry · 30 Aug 2022 2:14 UTC
57 points
35 comments · 1 min read · LW link

EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22')

Zoe Williams · 30 Aug 2022 1:42 UTC
57 points
4 comments · 12 min read · LW link

Can We Align a Self-Improving AGI?

Peter S. Park · 30 Aug 2022 0:14 UTC
8 points
5 comments · 11 min read · LW link

On the nature of help—a framework for helping

Faustify · 29 Aug 2022 20:42 UTC
3 points
2 comments · 13 min read · LW link

Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should?

Gordon Seidoh Worley · 29 Aug 2022 19:25 UTC
15 points
6 comments · 13 min read · LW link

[Question] How can I reconcile the two most likely requirements for humanities near-term survival.

Erlja Jkdf. · 29 Aug 2022 18:46 UTC
1 point
6 comments · 1 min read · LW link

*New* Canada AI Safety & Governance community

Wyatt Tessari L'Allié · 29 Aug 2022 18:45 UTC
21 points
0 comments · 1 min read · LW link

Are Generative World Models a Mesa-Optimization Risk?

Thane Ruthenis · 29 Aug 2022 18:37 UTC
13 points
2 comments · 3 min read · LW link

Sequencing Intro

jefftk · 29 Aug 2022 17:50 UTC
39 points
3 comments · 5 min read · LW link
(www.jefftk.com)

How Do AI Timelines Affect Existential Risk?

Stephen McAleese · 29 Aug 2022 16:57 UTC
7 points
9 comments · 23 min read · LW link

How might we align transformative AI if it's developed very soon?

HoldenKarnofsky · 29 Aug 2022 15:42 UTC
139 points
55 comments · 45 min read · LW link · 1 review

An Audio Introduction to Nick Bostrom

PeterH · 29 Aug 2022 8:50 UTC
12 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Please Do Fight the Hypothetical

Lone Pine · 29 Aug 2022 8:35 UTC
18 points
6 comments · 3 min read · LW link

Have you considered getting rid of death?

Willa · 29 Aug 2022 1:31 UTC
20 points
19 comments · 1 min read · LW link
(immortalityisgreat.substack.com)

(My understanding of) What Everyone in Technical Alignment is Doing and Why

29 Aug 2022 1:23 UTC
412 points
89 comments · 38 min read · LW link · 1 review

Breaking down the training/deployment dichotomy

Erik Jenner · 28 Aug 2022 21:45 UTC
30 points
3 comments · 3 min read · LW link

More Clothes Over Time?

jefftk · 28 Aug 2022 20:30 UTC
30 points
1 comment · 1 min read · LW link
(www.jefftk.com)

The Expanding Moral Cinematic Universe

Raemon · 28 Aug 2022 18:42 UTC
64 points
9 comments · 14 min read · LW link

An Introduction to Current Theories of Consciousness

hohenheim · 28 Aug 2022 17:55 UTC
60 points
44 comments · 49 min read · LW link

[Linkpost] Can lab-grown brains become conscious?

Jack R · 28 Aug 2022 17:45 UTC
14 points
3 comments · 1 min read · LW link

Robert Long On Why Artificial Sentience Might Matter

Michaël Trazzi · 28 Aug 2022 17:30 UTC
26 points
5 comments · 5 min read · LW link
(theinsideview.ai)

Artificial Moral Advisors: A New Perspective from Moral Psychology

David Gross · 28 Aug 2022 16:37 UTC
25 points
1 comment · 1 min read · LW link
(dl.acm.org)

Pronunciations

Solenoid_Entity · 28 Aug 2022 11:43 UTC
15 points
7 comments · 2 min read · LW link

First thing AI will do when it takes over is get fission going

visiax · 28 Aug 2022 5:56 UTC
−2 points
0 comments · 1 min read · LW link

Who ordered alignment's apple?

Eleni Angelou · 28 Aug 2022 4:05 UTC
6 points
3 comments · 3 min read · LW link

Sufficiently many Godzillas as an alignment strategy

142857 · 28 Aug 2022 0:08 UTC
8 points
3 comments · 1 min read · LW link

[Question] What would you expect a massive multimodal online federated learner to be capable of?

Aryeh Englander · 27 Aug 2022 17:31 UTC
13 points
4 comments · 1 min read · LW link

Basin broadness depends on the size and number of orthogonal features

27 Aug 2022 17:29 UTC
36 points
21 comments · 6 min read · LW link

Informal semantics and Orders

Q Home · 27 Aug 2022 4:17 UTC
14 points
10 comments · 26 min read · LW link