Thoughts on responsible scaling policies and regulation
paulfchristiano · Oct 24, 2023, 10:21 PM
221 points · 33 comments · 6 min read

The Screenplay Method
Yeshua God · Oct 24, 2023, 5:41 PM
−15 points · 0 comments · 25 min read

Blunt Razor
fryolysis · Oct 24, 2023, 5:27 PM
3 points · 0 comments · 2 min read

Halloween Problem
Saint Blasphemer · Oct 24, 2023, 4:46 PM
−10 points · 1 comment · 1 min read

Who is Harry Potter? Some predictions.
Donald Hobson · Oct 24, 2023, 4:14 PM
23 points · 7 comments · 2 min read

Book Review: Going Infinite
Zvi · Oct 24, 2023, 3:00 PM
246 points · 113 comments · 97 min read · 1 review
(thezvi.wordpress.com)

[Interview w/ Quintin Pope] Evolution, values, and AI Safety
fowlertm · Oct 24, 2023, 1:53 PM
11 points · 0 comments · 1 min read

Lying is Cowardice, not Strategy
Oct 24, 2023, 1:24 PM
29 points · 73 comments · 5 min read
(cognition.cafe)

[Question] Anyone Else Using Brilliant?
Sable · Oct 24, 2023, 12:12 PM
19 points · 0 comments · 1 min read

Announcing #AISummitTalks featuring Professor Stuart Russell and many others
otto.barten · Oct 24, 2023, 10:11 AM
17 points · 1 comment · 1 min read

Linkpost: A Post Mortem on the Gino Case
Linch · Oct 24, 2023, 6:50 AM
89 points · 7 comments · 2 min read
(www.theorgplumber.com)

South Bay SSC Meetup, San Jose, November 5th.
David Friedman · Oct 24, 2023, 4:50 AM
2 points · 1 comment · 1 min read

AI Pause Will Likely Backfire (Guest Post)
jsteinhardt · Oct 24, 2023, 4:30 AM
47 points · 6 comments · 15 min read
(bounded-regret.ghost.io)

Human wanting
TsviBT · Oct 24, 2023, 1:05 AM
53 points · 1 comment · 10 min read

Towards Understanding Sycophancy in Language Models
Oct 24, 2023, 12:30 AM
66 points · 0 comments · 2 min read
(arxiv.org)

Manifold Halloween Hackathon
Austin Chen · Oct 23, 2023, 10:47 PM
8 points · 0 comments · 1 min read

Open Source Replication & Commentary on Anthropic’s Dictionary Learning Paper
Neel Nanda · Oct 23, 2023, 10:38 PM
93 points · 12 comments · 9 min read

The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists
EJT · Oct 23, 2023, 9:00 PM
79 points · 22 comments
(philpapers.org)

AI Alignment [Incremental Progress Units] this Week (10/22/23)
Logan Zoellner · Oct 23, 2023, 8:32 PM
22 points · 0 comments · 6 min read
(midwitalignment.substack.com)

z is not the cause of x
hrbigelow · Oct 23, 2023, 5:43 PM
6 points · 2 comments · 9 min read

Some of my predictable updates on AI
Aaron_Scher · Oct 23, 2023, 5:24 PM
32 points · 8 comments · 9 min read

Programmatic backdoors: DNNs can use SGD to run arbitrary stateful computation
Oct 23, 2023, 4:37 PM
107 points · 3 comments · 8 min read

Machine Unlearning Evaluations as Interpretability Benchmarks
Oct 23, 2023, 4:33 PM
33 points · 2 comments · 11 min read

VLM-RM: Specifying Rewards with Natural Language
Oct 23, 2023, 2:11 PM
20 points · 2 comments · 5 min read
(far.ai)

Contra Dance Dialect Survey
jefftk · Oct 23, 2023, 1:40 PM
11 points · 0 comments · 1 min read
(www.jefftk.com)

[Question] Which LessWrongers are (aspiring) YouTubers?
Mati_Roy · Oct 23, 2023, 1:21 PM
22 points · 13 comments · 1 min read

[Question] What is an “anti-Occamian prior”?
Zane · Oct 23, 2023, 2:26 AM
35 points · 22 comments · 1 min read

AI Safety is Dropping the Ball on Clown Attacks
trevor · Oct 22, 2023, 8:09 PM
75 points · 82 comments · 34 min read

The Drowning Child
Tomás B. · Oct 22, 2023, 4:39 PM
25 points · 8 comments · 1 min read

Announcing Timaeus
Oct 22, 2023, 11:59 AM
188 points · 15 comments · 4 min read

Into AI Safety—Episode 0
jacobhaimes · Oct 22, 2023, 3:30 AM
5 points · 1 comment · 1 min read
(into-ai-safety.github.io)

Thoughts On (Solving) Deep Deception
Jozdien · Oct 21, 2023, 10:40 PM
72 points · 6 comments · 6 min read

Best effort beliefs
Adam Zerner · Oct 21, 2023, 10:05 PM
14 points · 9 comments · 4 min read

How toy models of ontology changes can be misleading
Stuart_Armstrong · Oct 21, 2023, 9:13 PM
42 points · 0 comments · 2 min read

Soups as Spreads
jefftk · Oct 21, 2023, 8:30 PM
22 points · 0 comments · 1 min read
(www.jefftk.com)

Which COVID booster to get?
Sameerishere · Oct 21, 2023, 7:43 PM
8 points · 0 comments · 2 min read

Alignment Implications of LLM Successes: a Debate in One Act
Zack_M_Davis · Oct 21, 2023, 3:22 PM
265 points · 56 comments · 13 min read · 2 reviews

How to find a good moving service
Ziyue Wang · Oct 21, 2023, 4:59 AM
8 points · 0 comments · 3 min read

Apply for MATS Winter 2023-24!
Oct 21, 2023, 2:27 AM
104 points · 6 comments · 5 min read

[Question] Can we isolate neurons that recognize features vs. those which have some other role?
Joshua Clancy · Oct 21, 2023, 12:30 AM
4 points · 2 comments · 3 min read

Muddling Along Is More Likely Than Dystopia
Jeffrey Heninger · Oct 20, 2023, 9:25 PM
88 points · 10 comments · 8 min read

What’s Hard About The Shutdown Problem
johnswentworth · Oct 20, 2023, 9:13 PM
98 points · 33 comments · 4 min read

Holly Elmore and Rob Miles dialogue on AI Safety Advocacy
Oct 20, 2023, 9:04 PM
162 points · 30 comments · 27 min read

TOMORROW: the largest AI Safety protest ever!
Holly_Elmore · Oct 20, 2023, 6:15 PM
105 points · 26 comments · 2 min read

The Overkill Conspiracy Hypothesis
ymeskhout · Oct 20, 2023, 4:51 PM
26 points · 8 comments · 7 min read

I Would Have Solved Alignment, But I Was Worried That Would Advance Timelines
307th · Oct 20, 2023, 4:37 PM
122 points · 33 comments · 9 min read

Internal Target Information for AI Oversight
Paul Colognese · Oct 20, 2023, 2:53 PM
15 points · 0 comments · 5 min read

On the proper date for solstice celebrations
jchan · Oct 20, 2023, 1:55 PM
16 points · 0 comments · 4 min read

Are (at least some) Large Language Models Holographic Memory Stores?
Bill Benzon · Oct 20, 2023, 1:07 PM
11 points · 4 comments · 6 min read

Mechanistic interpretability of LLM analogy-making
Sergii · Oct 20, 2023, 12:53 PM
2 points · 0 comments · 4 min read
(grgv.xyz)