The “no sandbagging on checkable tasks” hypothesis

Joe Carlsmith · Jul 31, 2023, 11:06 PM
61 points
14 comments · 9 min read · LW link

A Social History of Truth

Vaniver · Jul 31, 2023, 10:49 PM
64 points
2 comments · 14 min read · LW link

Watermarking considered overrated?

DanielFilan · Jul 31, 2023, 9:36 PM
19 points
4 comments · 1 min read · LW link

What The Lord of the Rings Teaches Us About AI Alignment

Jeffrey Heninger · Jul 31, 2023, 8:16 PM
24 points
12 comments · 7 min read · LW link

The “spelling miracle”: GPT-3 spelling abilities and glitch tokens revisited

mwatkins · Jul 31, 2023, 7:47 PM
85 points
29 comments · 20 min read · LW link

“Building a House” Review

jefftk · Jul 31, 2023, 7:20 PM
62 points
6 comments · 1 min read · LW link
(www.jefftk.com)

The Meaning of Shoggoth AI Memes

Dan Smith · Jul 31, 2023, 6:52 PM
−5 points
5 comments · 2 min read · LW link

[Question] Is there any existing term summarizing non-scalable oversight methods in outer alignment?

Allen Shen · Jul 31, 2023, 5:31 PM
1 point
0 comments · 1 min read · LW link

Lack of Social Grace Is an Epistemic Virtue

Zack_M_Davis · Jul 31, 2023, 4:38 PM
41 points
105 comments · 4 min read · LW link · 2 reviews

Thoughts on sharing information about language model capabilities

paulfchristiano · Jul 31, 2023, 4:04 PM
210 points
44 comments · 11 min read · LW link · 1 review

Trading off compute in training and inference (Overview)

Pablo Villalobos · Jul 31, 2023, 4:03 PM
42 points
2 comments · 7 min read · LW link
(epochai.org)

Open Problems and Fundamental Limitations of RLHF

scasper · Jul 31, 2023, 3:31 PM
66 points
6 comments · 2 min read · LW link
(arxiv.org)

“Not Necessarily”

Benjamin Hendricks · Jul 31, 2023, 3:19 PM
24 points
2 comments · 2 min read · LW link

How to find AI alignment researchers to collaborate with?

Florian Dietz · Jul 31, 2023, 9:05 AM
2 points
2 comments · 1 min read · LW link

[Question] Is Kennedy a Nazi?

Pee Doom · Jul 31, 2023, 8:51 AM
−12 points
10 comments · 2 min read · LW link

Is Light Drinking Protective?

jefftk · Jul 31, 2023, 3:00 AM
45 points
8 comments · 2 min read · LW link
(www.jefftk.com)

EU’s AI ambitions at risk as US pushes to water down international treaty (linkpost)

mic · Jul 31, 2023, 12:34 AM
10 points
0 comments · 4 min read · LW link
(www.euractiv.com)

The rise of AI in cybercrime

BobyResearcher · Jul 30, 2023, 8:19 PM
−15 points
1 comment · 2 min read · LW link
(riseofAIincybercryme)

SSA vs. SIA: how future population may provide evidence for or against the foundations of political liberalism

j · Jul 30, 2023, 8:18 PM
−6 points
10 comments · 55 min read · LW link

Rationalization Maximizes Expected Value

Kevin Dorst · Jul 30, 2023, 8:11 PM
19 points
10 comments · 7 min read · LW link
(kevindorst.substack.com)

Apollo Neuro Results

Elizabeth · Jul 30, 2023, 6:40 PM
85 points
17 comments · 3 min read · LW link
(acesounderglass.com)

Hilbert’s Triumph, Church and Turing’s failure, and what it means (Post #2)

Noosphere89 · Jul 30, 2023, 2:33 PM
−5 points
16 comments · 15 min read · LW link

[Question] Specific Arguments against open source LLMs?

Iknownothing · Jul 30, 2023, 2:27 PM
4 points
2 comments · 1 min read · LW link

Socialism in large organizations

Adam Zerner · Jul 30, 2023, 7:25 AM
7 points
16 comments · 2 min read · LW link

How to make real-money prediction markets on arbitrary topics (Outdated)

yutaka · Jul 30, 2023, 2:11 AM
57 points
13 comments · 3 min read · LW link

[Question] Does decidability of a theory imply completeness of the theory?

Noosphere89 · Jul 29, 2023, 11:53 PM
6 points
12 comments · 1 min read · LW link

[Question] If I showed the EQ-SQ theory’s findings to be due to measurement bias, would anyone change their minds about it?

tailcalled · Jul 29, 2023, 7:38 PM
23 points
13 comments · 1 min read · LW link

Self-driving car bets

paulfchristiano · Jul 29, 2023, 6:10 PM
236 points
44 comments · 5 min read · LW link
(sideways-view.com)

The Parable of the Dagger—The Animation

Writer · Jul 29, 2023, 2:03 PM
20 points
6 comments · 1 min read · LW link
(youtu.be)

Are Guitars Obsolete?

jefftk · Jul 29, 2023, 1:20 PM
11 points
8 comments · 2 min read · LW link
(www.jefftk.com)

NAMSI: A promising approach to alignment

Georgeo57 · Jul 29, 2023, 7:03 AM
−6 points
6 comments · 1 min read · LW link

Understanding and Aligning a Human-like Inductive Bias with Cognitive Science: a Review of Related Literature

Claire Short · Jul 29, 2023, 6:10 AM
27 points
0 comments · 12 min read · LW link

Why You Should Never Update Your Beliefs

Arjun Panickssery · Jul 29, 2023, 12:27 AM
76 points
18 comments · 4 min read · LW link · 1 review
(arjunpanickssery.substack.com)

Thoughts about the Mechanistic Interpretability Challenge #2 (EIS VII #2)

RGRGRG · Jul 28, 2023, 8:44 PM
24 points
5 comments · 20 min read · LW link

Because of LayerNorm, Directions in GPT-2 MLP Layers are Monosemantic

ojorgensen · Jul 28, 2023, 7:43 PM
13 points
3 comments · 13 min read · LW link

When can we trust model evaluations?

evhub · Jul 28, 2023, 7:42 PM
166 points
10 comments · 10 min read · LW link · 1 review

Yes, It’s Subjective, But Why All The Crabs?

johnswentworth · Jul 28, 2023, 7:35 PM
250 points
15 comments · 6 min read · LW link

Semaglutide and Muscle

5hout · Jul 28, 2023, 6:36 PM
15 points
14 comments · 5 min read · LW link

Double Crux in a Box

Screwtape · Jul 28, 2023, 5:55 PM
8 points
3 comments · 1 min read · LW link

Gradient descent might see the direction of the optimum from far away

Mikhail Samin · Jul 28, 2023, 4:19 PM
70 points
13 comments · 4 min read · LW link

Progress links digest, 2023-07-28: The decadent opulence of modern capitalism

jasoncrawford · Jul 28, 2023, 2:36 PM
16 points
3 comments · 3 min read · LW link
(rootsofprogress.org)

AI Awareness through Interaction with Blatantly Alien Models

VojtaKovarik · Jul 28, 2023, 8:41 AM
7 points
5 comments · 3 min read · LW link

You don’t get to have cool flaws

Neil · Jul 28, 2023, 5:37 AM
78 points
25 comments · 2 min read · LW link · 3 reviews

Reducing sycophancy and improving honesty via activation steering

Nina Panickssery · 28 Jul 2023 2:46 UTC
122 points
18 comments · 9 min read · LW link · 1 review

Mech Interp Puzzle 2: Word2Vec Style Embeddings

Neel Nanda · 28 Jul 2023 0:50 UTC
41 points
4 comments · 2 min read · LW link

ETFE windows

bhauth · 28 Jul 2023 0:46 UTC
31 points
4 comments · 2 min read · LW link
(www.bhauth.com)

A Short Memo on AI Interpretability Rainbows

scasper · 27 Jul 2023 23:05 UTC
18 points
0 comments · 2 min read · LW link

Pulling the Rope Sideways: Empirical Test Results

Daniel Kokotajlo · 27 Jul 2023 22:18 UTC
61 points
18 comments · 1 min read · LW link

A $10k retroactive grant for VaccinateCA

Austin Chen · 27 Jul 2023 18:14 UTC
82 points
0 comments · LW link
(manifund.org)

Preference Aggregation as Bayesian Inference

beren · 27 Jul 2023 17:59 UTC
14 points
1 comment · 1 min read · LW link