The 2021 Less Wrong Darwin Game

lsusr · Sep 24, 2021, 9:16 PM
162 points
102 comments · 9 min read · LW link

It’s Good Enough—A Party Game

Zmavli Caimle · Sep 24, 2021, 4:49 PM
3 points
0 comments · 4 min read · LW link

Explanations as Hard to Vary Assertions

Alexander · Sep 24, 2021, 11:33 AM
17 points
27 comments · 8 min read · LW link

How’s it going with the Universal Cultural Takeover? Part II

David Hugh-Jones · Sep 24, 2021, 9:37 AM
22 points
1 comment · 4 min read · LW link
(wyclif.substack.com)

Common knowledge about Leverage Research 1.0

BayAreaHuman · Sep 24, 2021, 6:56 AM
213 points
212 comments · 5 min read · LW link

Book Review: Who We Are and How We Got Here

Yair Halberstadt · Sep 24, 2021, 5:05 AM
16 points
4 comments · 15 min read · LW link

Cartesian Frames and Factored Sets on ArXiv

Scott Garrabrant · Sep 24, 2021, 4:58 AM
38 points
0 comments · 1 min read · LW link

Walkie-Talkies

jefftk · Sep 24, 2021, 1:10 AM
33 points
8 comments · 2 min read · LW link
(www.jefftk.com)

Forecasting Transformative AI, Part 1: What Kind of AI?

HoldenKarnofsky · Sep 24, 2021, 12:46 AM
17 points
17 comments · 9 min read · LW link

Shared Frames Are Capital Investments in Coordination

johnswentworth · Sep 23, 2021, 11:24 PM
93 points
6 comments · 14 min read · LW link · 1 review

Review: Martyr Made Podcast

Elizabeth · Sep 23, 2021, 8:10 PM
22 points
0 comments · 8 min read · LW link
(acesounderglass.com)

Matt Levine spots IRL Paperclip Maximizer in Reddit

Nebu · Sep 23, 2021, 7:10 PM
9 points
2 comments · 2 min read · LW link

How’s it going with the Universal Cultural Takeover? Part I

David Hugh-Jones · Sep 23, 2021, 7:07 PM
20 points
13 comments · 8 min read · LW link
(wyclif.substack.com)

What is Compute? - Transformative AI and Compute [1/4]

lennart · Sep 23, 2021, 4:25 PM
27 points
9 comments · 19 min read · LW link

Covid 9/23: There Is a War

Zvi · Sep 23, 2021, 1:30 PM
69 points
30 comments · 24 min read · LW link
(thezvi.wordpress.com)

“Rational Agents Win”

Isaac King · Sep 23, 2021, 7:59 AM
8 points
33 comments · 2 min read · LW link

Neural net / decision tree hybrids: a potential path toward bridging the interpretability gap

Nathan Helm-Burger · Sep 23, 2021, 12:38 AM
21 points
2 comments · 12 min read · LW link

[Book Review] Altered Traits

lsusr · Sep 23, 2021, 12:33 AM
68 points
12 comments · 1 min read · LW link

[Question] How dangerous is Long COVID for kids?

Viliam · Sep 22, 2021, 10:29 PM
27 points
3 comments · 1 min read · LW link

[Summary] “Introduction to Electrodynamics” by David Griffiths—Part 1

lsusr · Sep 22, 2021, 10:22 PM
25 points
4 comments · 4 min read · LW link

Cancelled: Bangkok, Thailand – ACX Meetups Everywhere 2021

Robert Höglund · Sep 22, 2021, 7:58 PM
7 points
0 comments · 1 min read · LW link

Robin Hanson’s Grabby Aliens model explained—part 1

Writer · Sep 22, 2021, 6:51 PM
72 points
30 comments · 8 min read · LW link · 1 review
(youtu.be)

[Question] Weird models of country development?

Connor_Flexman · Sep 22, 2021, 5:39 PM
7 points
7 comments · 1 min read · LW link

[AN #165]: When large models are more likely to lie

Rohin Shah · Sep 22, 2021, 5:30 PM
23 points
0 comments · 8 min read · LW link
(mailchi.mp)

[Question] What are good models of collusion in AI?

EconomicModel · Sep 22, 2021, 3:16 PM
7 points
1 comment · 1 min read · LW link

[Question] Why do we talk about autism (spectrum) without talking about borderline p.d.?

MoritzG · Sep 22, 2021, 2:32 PM
−11 points
8 comments · 1 min read · LW link

Accidental Optimizers

aysajan · Sep 22, 2021, 1:27 PM
7 points
2 comments · 3 min read · LW link

[Question] How do you learn to take more beautiful pictures with a camera?

ChristianKl · Sep 22, 2021, 12:52 PM
11 points
8 comments · 1 min read · LW link

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly

RomanS · Sep 22, 2021, 6:29 AM
5 points
2 comments · 1 min read · LW link

Seattle, WA – October 2021 ACX Meetup

Optimization Process · Sep 22, 2021, 5:30 AM
7 points
5 comments · 1 min read · LW link

Insights from Modern Principles of Economics

TurnTrout · Sep 22, 2021, 5:19 AM
81 points
64 comments · 1 min read · LW link

Petrov Day 2021: Mutually Assured Destruction?

Ruby · Sep 22, 2021, 1:04 AM
99 points
96 comments · 4 min read · LW link

Redwood Research’s current project

Buck · Sep 21, 2021, 11:30 PM
145 points
29 comments · 15 min read · LW link · 1 review

GiveWell Donation Matching

jefftk · Sep 21, 2021, 10:50 PM
10 points
5 comments · 2 min read · LW link
(www.jefftk.com)

The Effectiveness Of Masks is Limited

Mike Harris · Sep 21, 2021, 5:03 PM
24 points
7 comments · 6 min read · LW link

Three enigmas at the heart of our reasoning

Alex Flint · Sep 21, 2021, 4:52 PM
56 points
66 comments · 9 min read · LW link · 1 review

David Wolpert on Knowledge

Alex Flint · Sep 21, 2021, 1:54 AM
33 points
3 comments · 13 min read · LW link

Announcing the Vitalik Buterin Fellowships in AI Existential Safety!

DanielFilan · Sep 21, 2021, 12:33 AM
64 points
2 comments · 1 min read · LW link
(grants.futureoflife.org)

Toronto, ON—ACX/SSC/LW Meetup + Book Exchange

Sean Aubin · Sep 21, 2021, 12:18 AM
2 points
1 comment · 1 min read · LW link

[Question] Search for replication experiments

Andrew Vlahos · Sep 20, 2021, 10:28 PM
3 points
1 comment · 1 min read · LW link

Emotional microscope

pchvykov · Sep 20, 2021, 9:37 PM
3 points
9 comments · 3 min read · LW link

Ten Hundred Megaseconds

philh · Sep 20, 2021, 9:30 PM
7 points
6 comments · 2 min read · LW link
(reasonableapproximation.net)

[Question] How should dance venues best protect the drinks of attendees?

Maxwell Peterson · Sep 20, 2021, 7:32 PM
10 points
22 comments · 1 min read · LW link

Ordinary People and Extraordinary Evil: A Report on the Beguilings of Evil

David Gross · Sep 20, 2021, 3:19 PM
58 points
31 comments · 4 min read · LW link

AI, learn to be conservative, then learn to be less so: reducing side-effects, learning preserved features, and going beyond conservatism

Stuart_Armstrong · Sep 20, 2021, 11:56 AM
14 points
4 comments · 3 min read · LW link

[Question] How much should you be willing to pay for an AGI?

Logan Zoellner · Sep 20, 2021, 11:51 AM
11 points
5 comments · 1 min read · LW link

Sigmoids behaving badly: arXiv paper

Stuart_Armstrong · Sep 20, 2021, 10:29 AM
24 points
1 comment · 1 min read · LW link

[Book Review] “The Alignment Problem” by Brian Christian

lsusr · Sep 20, 2021, 6:36 AM
72 points
16 comments · 6 min read · LW link

Testing The Natural Abstraction Hypothesis: Project Update

johnswentworth · Sep 20, 2021, 3:44 AM
88 points
17 comments · 8 min read · LW link · 1 review

Belmont/Mid-Peninsula ACX Meetup

moshezadka · Sep 20, 2021, 12:32 AM
1 point
0 comments · LW link