[Question] Book recommendations for the history of ML?

Eleni Angelou · 28 Dec 2022 23:50 UTC
2 points
2 comments · 1 min read · LW link

Rock-Paper-Scissors Can Be Weird

winwonce · 28 Dec 2022 23:12 UTC
14 points
3 comments · 1 min read · LW link

200 COP in MI: The Case for Analysing Toy Language Models

Neel Nanda · 28 Dec 2022 21:07 UTC
39 points
3 comments · 7 min read · LW link

200 Concrete Open Problems in Mechanistic Interpretability: Introduction

Neel Nanda · 28 Dec 2022 21:06 UTC
103 points
0 comments · 10 min read · LW link

Effective ways to find love?

anonymoususer · 28 Dec 2022 20:46 UTC
8 points
6 comments · 1 min read · LW link

Classical logic based on propositions-as-subsingleton-types

Thomas Kehrenberg · 28 Dec 2022 20:16 UTC
3 points
0 comments · 16 min read · LW link

In Defense of Wrapper-Minds

Thane Ruthenis · 28 Dec 2022 18:28 UTC
23 points
38 comments · 3 min read · LW link

[Question] What is the best way to approach Expected Value calculations when payoffs are highly skewed?

jmh · 28 Dec 2022 14:42 UTC
8 points
16 comments · 1 min read · LW link

Bandwagon effect: Bias in Evaluating AGI X-Risks

28 Dec 2022 7:54 UTC
−1 points
0 comments · 1 min read · LW link

Getting up to Speed on the Speed Prior in 2022

robertzk · 28 Dec 2022 7:49 UTC
36 points
5 comments · 65 min read · LW link

[Question] World superpowers, particularly the United States, still maintain large conventional militaries despite nuclear deterrence. Why?

niederman · 28 Dec 2022 5:38 UTC
9 points
8 comments · 1 min read · LW link
(maxniederman.com)

[Question] What does “probability” really mean?

sisyphus · 28 Dec 2022 3:20 UTC
5 points
20 comments · 1 min read · LW link

Zooming the Chrome Audio Player

jefftk · 28 Dec 2022 2:30 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

What AI Safety Materials Do ML Researchers Find Compelling?

28 Dec 2022 2:03 UTC
175 points
34 comments · 2 min read · LW link

South Bay ACX/LW Meetup

IS · 28 Dec 2022 1:59 UTC
3 points
0 comments · 1 min read · LW link

Regarding Blake Lemoine’s claim that LaMDA is ‘sentient’, he might be right (sorta), but perhaps not for the reasons he thinks

philosophybear · 28 Dec 2022 1:55 UTC
9 points
1 comment · 6 min read · LW link

Fundamental Uncertainty: Chapter 5 - How do we know what we know?

Gordon Seidoh Worley · 28 Dec 2022 1:28 UTC
10 points
2 comments · 12 min read · LW link

Is checking that a state of the world is not dystopian easier than constructing a non-dystopian state?

No77e · 27 Dec 2022 20:57 UTC
5 points
3 comments · 1 min read · LW link

Crypto-currency as pro-alignment mechanism

False Name · 27 Dec 2022 17:45 UTC
−10 points
2 comments · 2 min read · LW link

My Reservations about Discovering Latent Knowledge (Burns, Ye, et al)

Robert_AIZI · 27 Dec 2022 17:27 UTC
50 points
0 comments · 4 min read · LW link
(aizi.substack.com)

Things that can kill you quickly: What everyone should know about first aid

jasoncrawford · 27 Dec 2022 16:23 UTC
166 points
21 comments · 2 min read · LW link · 1 review
(jasoncrawford.org)

[Question] Why The Focus on Expected Utility Maximisers?

DragonGod · 27 Dec 2022 15:49 UTC
116 points
84 comments · 3 min read · LW link

Presumptive Listening: sticking to familiar concepts and missing the outer reasoning paths

Remmelt · 27 Dec 2022 15:40 UTC
−14 points
8 comments · 2 min read · LW link
(mflb.com)

Mere exposure effect: Bias in Evaluating AGI X-Risks

27 Dec 2022 14:05 UTC
0 points
2 comments · 1 min read · LW link

Housing and Transportation Roundup #2

Zvi · 27 Dec 2022 13:10 UTC
25 points
0 comments · 12 min read · LW link
(thezvi.wordpress.com)

[Question] Are tulpas moral patients?

ChristianKl · 27 Dec 2022 11:30 UTC
16 points
28 comments · 1 min read · LW link

Reflections on my 5-month alignment upskilling grant

Jay Bailey · 27 Dec 2022 10:51 UTC
82 points
4 comments · 8 min read · LW link

Institutions Cannot Restrain Dark-Triad AI Exploitation

27 Dec 2022 10:34 UTC
5 points
0 comments · 5 min read · LW link
(mflb.com)

Introduction: Bias in Evaluating AGI X-Risks

27 Dec 2022 10:27 UTC
1 point
0 comments · 3 min read · LW link

MDPs and the Bellman Equation, Intuitively Explained

Jack O'Brien · 27 Dec 2022 5:50 UTC
11 points
3 comments · 14 min read · LW link

How ‘Human-Human’ dynamics give way to ‘Human-AI’ and then ‘AI-AI’ dynamics

27 Dec 2022 3:16 UTC
−2 points
5 comments · 2 min read · LW link
(mflb.com)

Nine Points of Collective Insanity

27 Dec 2022 3:14 UTC
−2 points
3 comments · 1 min read · LW link
(mflb.com)

Fractional Resignation

jefftk · 27 Dec 2022 2:30 UTC
18 points
6 comments · 1 min read · LW link
(www.jefftk.com)

[Question] What policies have most thoroughly crippled (otherwise-promising) industries or technologies?

benwr · 27 Dec 2022 2:25 UTC
40 points
4 comments · 1 min read · LW link

Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models)

philosophybear · 27 Dec 2022 2:11 UTC
1 point
0 comments · 7 min read · LW link

Against Agents as an Approach to Aligned Transformative AI

DragonGod · 27 Dec 2022 0:47 UTC
12 points
9 comments · 2 min read · LW link

Can we efficiently distinguish different mechanisms?

paulfchristiano · 27 Dec 2022 0:20 UTC
88 points
30 comments · 16 min read · LW link
(ai-alignment.com)

Air-gapping evaluation and support

Ryan Kidd · 26 Dec 2022 22:52 UTC
53 points
1 comment · 2 min read · LW link

Slightly against aligning with neo-luddites

Matthew Barnett · 26 Dec 2022 22:46 UTC
104 points
31 comments · 4 min read · LW link

Avoiding perpetual risk from TAI

scasper · 26 Dec 2022 22:34 UTC
15 points
6 comments · 5 min read · LW link

Announcing: The Independent AI Safety Registry

Shoshannah Tekofsky · 26 Dec 2022 21:22 UTC
53 points
9 comments · 1 min read · LW link

Are men harder to help?

braces · 26 Dec 2022 21:11 UTC
35 points
1 comment · 2 min read · LW link

[Question] How much should I update on the fact that my dentist is named Dennis?

MichaelDickens · 26 Dec 2022 19:11 UTC
2 points
3 comments · 1 min read · LW link

Theodicy and the simulation hypothesis, or: The problem of simulator evil

philosophybear · 26 Dec 2022 18:55 UTC
6 points
12 comments · 19 min read · LW link
(philosophybear.substack.com)

Safety of Self-Assembled Neuromorphic Hardware

Can · 26 Dec 2022 18:51 UTC
15 points
2 comments · 10 min read · LW link
(forum.effectivealtruism.org)

Coherent extrapolated dreaming

Alex Flint · 26 Dec 2022 17:29 UTC
38 points
10 comments · 17 min read · LW link

An overview of some promising work by junior alignment researchers

Akash · 26 Dec 2022 17:23 UTC
34 points
0 comments · 4 min read · LW link

Solstice song: Here Lies the Dragon

jchan · 26 Dec 2022 16:08 UTC
8 points
1 comment · 2 min read · LW link

The Usefulness Paradigm

Aprillion (Peter Hozák) · 26 Dec 2022 13:23 UTC
3 points
4 comments · 1 min read · LW link

Looking Back on Posts From 2022

Zvi · 26 Dec 2022 13:20 UTC
49 points
8 comments · 17 min read · LW link
(thezvi.wordpress.com)