“Just hiring people” is sometimes still actually possible

lc · 5 Aug 2022 21:44 UTC
38 points
11 comments · 5 min read · LW link

The need for certainty

Thomas McMurtry · 5 Aug 2022 20:18 UTC
2 points
0 comments · 4 min read · LW link

Rant on Problem Factorization for Alignment

johnswentworth · 5 Aug 2022 19:23 UTC
90 points
51 comments · 6 min read · LW link

Counterfactuals are Confusing because of an Ontological Shift

Chris_Leong · 5 Aug 2022 19:03 UTC
17 points
35 comments · 2 min read · LW link

Orange county ACX/Less-Wrong discussion group and hang-out. (orange county)

Michael Michalchik · 5 Aug 2022 18:25 UTC
2 points
0 comments · 1 min read · LW link

Gears-Level Understanding, Deliberate Performance, The Strategic Level

CFAR!Duncan · 5 Aug 2022 17:11 UTC
28 points
3 comments · 5 min read · LW link

[Question] COVID-19 Group Testing Post-mortem?

gwern · 5 Aug 2022 16:32 UTC
72 points
6 comments · 2 min read · LW link

Where are the red lines for AI?

Karl von Wendt · 5 Aug 2022 9:34 UTC
25 points
10 comments · 6 min read · LW link

Bridging Expected Utility Maximization and Optimization

Whispermute · 5 Aug 2022 8:18 UTC
25 points
5 comments · 14 min read · LW link

Deontology and Tool AI

Nathan1123 · 5 Aug 2022 5:20 UTC
4 points
5 comments · 6 min read · LW link

An attempt to understand the Complexity of Values

Dalton Mabery · 5 Aug 2022 4:43 UTC
3 points
0 comments · 5 min read · LW link

$20K In Bounties for AI Safety Public Materials

5 Aug 2022 2:52 UTC
71 points
9 comments · 6 min read · LW link

Two Kids Crosswise

jefftk · 5 Aug 2022 2:40 UTC
16 points
3 comments · 1 min read · LW link
(www.jefftk.com)

The Falling Drill

Screwtape · 5 Aug 2022 0:08 UTC
46 points
3 comments · 2 min read · LW link

Convergence Towards World-Models: A Gears-Level Model

Thane Ruthenis · 4 Aug 2022 23:31 UTC
38 points
1 comment · 13 min read · LW link

Cambist Booking

Screwtape · 4 Aug 2022 22:40 UTC
20 points
3 comments · 4 min read · LW link

Calibration Trivia

Screwtape · 4 Aug 2022 22:31 UTC
11 points
9 comments · 3 min read · LW link

Monthly Shorts 7/22

Celer · 4 Aug 2022 22:30 UTC
5 points
0 comments · 3 min read · LW link
(keller.substack.com)

The Pragmascope Idea

johnswentworth · 4 Aug 2022 21:52 UTC
58 points
19 comments · 3 min read · LW link

Running a Basic Meetup

Screwtape · 4 Aug 2022 21:49 UTC
20 points
1 comment · 2 min read · LW link

Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”

eukaryote · 4 Aug 2022 20:37 UTC
124 points
15 comments · 9 min read · LW link · 1 review
(eukaryotewritesblog.com)

[Question] Would “Manhattan Project” style be beneficial or deleterious for AI Alignment?

Just Learning · 4 Aug 2022 19:12 UTC
5 points
1 comment · 1 min read · LW link

[Question] AI alignment: Would a lazy self-preservation instinct be sufficient?

BrainFrog · 4 Aug 2022 17:53 UTC
−1 points
4 comments · 1 min read · LW link

Socratic Ducking, OODA Loops, Frame-by-Frame Debugging

CFAR!Duncan · 4 Aug 2022 17:44 UTC
25 points
1 comment · 5 min read · LW link

What do ML researchers think about AI in 2022?

KatjaGrace · 4 Aug 2022 15:40 UTC
219 points
33 comments · 3 min read · LW link
(aiimpacts.org)

Interpretability isn’t Free

Joel Burget · 4 Aug 2022 15:02 UTC
10 points
1 comment · 2 min read · LW link

Covid 8/4/22: Rebound

Zvi · 4 Aug 2022 11:20 UTC
36 points
0 comments · 11 min read · LW link
(thezvi.wordpress.com)

High Reliability Orgs, and AI Companies

Raemon · 4 Aug 2022 5:45 UTC
86 points
7 comments · 12 min read · LW link · 1 review

Surprised by ELK report’s counterexample to Debate, IDA

Evan R. Murphy · 4 Aug 2022 2:12 UTC
18 points
0 comments · 5 min read · LW link

Clapping Lower

jefftk · 4 Aug 2022 2:10 UTC
38 points
7 comments · 1 min read · LW link
(www.jefftk.com)

[Question] How do I know if my first post should be a post, or a question?

Nathan1123 · 4 Aug 2022 1:46 UTC
3 points
4 comments · 1 min read · LW link

Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination

Alex Lintz · 3 Aug 2022 23:15 UTC
22 points
0 comments · 12 min read · LW link

Precursor checking for deceptive alignment

evhub · 3 Aug 2022 22:56 UTC
24 points
0 comments · 14 min read · LW link

Transformer language models are doing something more general

Numendil · 3 Aug 2022 21:13 UTC
53 points
6 comments · 2 min read · LW link

[Question] Some doubts about Non Superintelligent AIs

aditya malik · 3 Aug 2022 19:55 UTC
0 points
4 comments · 1 min read · LW link

Announcing Squiggle: Early Access

ozziegooen · 3 Aug 2022 19:48 UTC
51 points
7 comments · 7 min read · LW link
(forum.effectivealtruism.org)

Survey: What (de)motivates you about AI risk?

Daniel_Friedrich · 3 Aug 2022 19:17 UTC
1 point
0 comments · 1 min read · LW link
(forms.gle)

Externalized reasoning oversight: a research direction for language model alignment

tamera · 3 Aug 2022 12:03 UTC
130 points
23 comments · 6 min read · LW link

Open & Welcome Thread - Aug/Sep 2022

Thomas · 3 Aug 2022 10:22 UTC
9 points
32 comments · 1 min read · LW link

[Question] How does one recognize information and differentiate it from noise?

M. Y. Zuo · 3 Aug 2022 3:57 UTC
4 points
29 comments · 1 min read · LW link

Law-Following AI 4: Don’t Rely on Vicarious Liability

Cullen · 2 Aug 2022 23:26 UTC
5 points
2 comments · 3 min read · LW link

Two-year update on my personal AI timelines

Ajeya Cotra · 2 Aug 2022 23:07 UTC
288 points
60 comments · 16 min read · LW link

What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers

rogersbacon · 2 Aug 2022 22:37 UTC
24 points
6 comments · 1 min read · LW link

Againstness

CFAR!Duncan · 2 Aug 2022 19:29 UTC
47 points
7 comments · 9 min read · LW link

(Summary) Sequence Highlights - Thinking Better on Purpose

qazzquimby · 2 Aug 2022 17:45 UTC
33 points
3 comments · 11 min read · LW link

Progress links and tweets, 2022-08-02

jasoncrawford · 2 Aug 2022 17:03 UTC
9 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

[Question] I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn’t kill us all. What are my best options, where can I make the most difference per dollar?

lumenwrites · 2 Aug 2022 12:08 UTC
15 points
9 comments · 1 min read · LW link

Thinking without priors?

Q Home · 2 Aug 2022 9:17 UTC
7 points
0 comments · 9 min read · LW link

[Question] Would quantum immortality mean subjective immortality?

n0ah · 2 Aug 2022 4:54 UTC
2 points
10 comments · 1 min read · LW link

Turbocharging

CFAR!Duncan · 2 Aug 2022 0:01 UTC
50 points
3 comments · 9 min read · LW link