Gizmo Watch Review

jefftk · Jun 18, 2024, 8:00 PM
22 points
5 comments · 6 min read · LW link
(www.jefftk.com)

Boycott OpenAI

PeterMcCluskey · Jun 18, 2024, 7:52 PM
164 points
26 comments · 1 min read · LW link
(bayesianinvestor.com)

Loving a world you don’t trust

Joe Carlsmith · Jun 18, 2024, 7:31 PM
135 points
13 comments · 33 min read · LW link

Book review: the Iliad

philh · Jun 18, 2024, 6:50 PM
31 points
2 comments · 14 min read · LW link
(reasonableapproximation.net)

AI Safety Newsletter #37: US Launches Antitrust Investigations. Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness

Jun 18, 2024, 6:07 PM
8 points
0 comments · 5 min read · LW link
(newsletter.safe.ai)

Suffering Is Not Pain

jbkjr · Jun 18, 2024, 6:04 PM
34 points
45 comments · 5 min read · LW link
(jbkjr.me)

Lamini’s Targeted Hallucination Reduction May Be a Big Deal for Job Automation

sweenesm · Jun 18, 2024, 3:29 PM
3 points
0 comments · 1 min read · LW link

On DeepMind’s Frontier Safety Framework

Zvi · Jun 18, 2024, 1:30 PM
37 points
4 comments · 8 min read · LW link
(thezvi.wordpress.com)

[Linkpost] Transcendence: Generative Models Can Outperform The Experts That Train Them

Bogdan Ionut Cirstea · Jun 18, 2024, 11:00 AM
19 points
3 comments · 1 min read · LW link
(arxiv.org)

I would have shit in that alley, too

Declan Molony · Jun 18, 2024, 4:41 AM
462 points
134 comments · 4 min read · LW link

[Question] The thing I don’t understand about AGI

Jeremy Kalfus · Jun 18, 2024, 4:25 AM
7 points
12 comments · 1 min read · LW link

Calling My Second Family Dance

jefftk · Jun 18, 2024, 2:20 AM
11 points
0 comments · 1 min read · LW link
(www.jefftk.com)

LLM-Secured Systems: A General-Purpose Tool For Structured Transparency

ozziegooen · Jun 18, 2024, 12:21 AM
10 points
1 comment · LW link

D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues Evaluation & Ruleset

aphyer · Jun 17, 2024, 9:29 PM
51 points
11 comments · 6 min read · LW link

Questionable Narratives of “Situational Awareness”

fergusq · Jun 17, 2024, 9:01 PM
0 points
1 comment · 1 min read · LW link
(forum.effectivealtruism.org)

ZuVillage Georgia – Mission Statement

Burns · Jun 17, 2024, 7:53 PM
3 points
3 comments · 9 min read · LW link

Getting 50% (SoTA) on ARC-AGI with GPT-4o

ryan_greenblatt · Jun 17, 2024, 6:44 PM
263 points
50 comments · 13 min read · LW link

Sycophancy to subterfuge: Investigating reward tampering in large language models

Jun 17, 2024, 6:41 PM
161 points
22 comments · 8 min read · LW link
(arxiv.org)

Labor Participation is a High-Priority AI Alignment Risk

alex · Jun 17, 2024, 6:09 PM
6 points
0 comments · 17 min read · LW link

Towards a Less Bullshit Model of Semantics

Jun 17, 2024, 3:51 PM
94 points
44 comments · 21 min read · LW link

Analysing Adversarial Attacks with Linear Probing

Jun 17, 2024, 2:16 PM
9 points
0 comments · 8 min read · LW link

What’s the future of AI hardware?

Itay Dreyfus · Jun 17, 2024, 1:05 PM
2 points
0 comments · 8 min read · LW link
(productidentity.co)

OpenAI #8: The Right to Warn

Zvi · Jun 17, 2024, 12:00 PM
97 points
8 comments · 34 min read · LW link
(thezvi.wordpress.com)

Logit Prisms: Decomposing Transformer Outputs for Mechanistic Interpretability

ntt123 · Jun 17, 2024, 11:46 AM
5 points
4 comments · 6 min read · LW link
(neuralblog.github.io)

Weak AGIs Kill Us First

yrimon · Jun 17, 2024, 11:13 AM
15 points
4 comments · 9 min read · LW link

[Linkpost] Guardian article covering Lightcone Infrastructure, Manifest and CFAR ties to FTX

ROM · Jun 17, 2024, 10:05 AM
8 points
9 comments · 1 min read · LW link
(www.theguardian.com)

Fat Tails Discourage Compromise

niplav · Jun 17, 2024, 9:39 AM
53 points
5 comments · 1 min read · LW link

Our Intuitions About The Criminal Justice System Are Screwed Up

omnizoid · Jun 17, 2024, 6:22 AM
12 points
15 comments · 4 min read · LW link

A Case for Cooperation: Dependence in the Prisoner’s Dilemma

grantstenger · Jun 17, 2024, 1:10 AM
10 points
3 comments · 23 min read · LW link

Degeneracies are sticky for SGD

Jun 16, 2024, 9:19 PM
56 points
1 comment · 16 min read · LW link

YM’s Shortform

YM · Jun 16, 2024, 8:57 PM
3 points
1 comment · 1 min read · LW link

“Is-Ought” is Fraught

MiSteR Kittty · Jun 16, 2024, 5:27 PM
−5 points
2 comments · 1 min read · LW link

The type of AI humanity has chosen to create so far is unsafe, for soft social reasons and not technical ones.

l8c · Jun 16, 2024, 1:31 PM
−6 points
2 comments · 1 min read · LW link

Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller

Henry Cai · Jun 16, 2024, 1:01 PM
7 points
0 comments · 7 min read · LW link
(arxiv.org)

CIV: a story

Richard_Ngo · Jun 15, 2024, 10:36 PM
98 points
6 comments · 9 min read · LW link
(www.narrativeark.xyz)

Yann LeCun: We only design machines that minimize costs [therefore they are safe]

tailcalled · Jun 15, 2024, 5:25 PM
19 points
8 comments · 1 min read · LW link
(twitter.com)

(Appetitive, Consummatory) ≈ (RL, reflex)

Steven Byrnes · Jun 15, 2024, 3:57 PM
38 points
1 comment · 3 min read · LW link

Two LessWrong speed friending experiments

Jun 15, 2024, 10:52 AM
52 points
3 comments · 4 min read · LW link

Claude’s dark spiritual AI futurism

jessicata · Jun 15, 2024, 12:57 AM
22 points
7 comments · 43 min read · LW link
(unstableontology.com)

[Question] When is “unfalsifiable implies false” incorrect?

VojtaKovarik · Jun 15, 2024, 12:28 AM
3 points
11 comments · 1 min read · LW link

MIRI’s June 2024 Newsletter

Harlan · Jun 14, 2024, 11:02 PM
74 points
20 comments · 2 min read · LW link
(intelligence.org)

Language for Goal Misgeneralization: Some Formalisms from my MSc Thesis

Giulio · Jun 14, 2024, 7:35 PM
10 points
0 comments · 8 min read · LW link
(www.giuliostarace.com)

Shard Theory—is it true for humans?

Rishika · Jun 14, 2024, 7:21 PM
71 points
7 comments · 15 min read · LW link

When fine-tuning fails to elicit GPT-3.5’s chess abilities

Theodore Chapman · Jun 14, 2024, 6:50 PM
42 points
3 comments · 9 min read · LW link

Results from the AI x Democracy Research Sprint

Jun 14, 2024, 4:40 PM
13 points
0 comments · 6 min read · LW link

Rational Animations’ intro to mechanistic interpretability

Writer · Jun 14, 2024, 4:10 PM
45 points
1 comment · 11 min read · LW link
(youtu.be)

Why keep a diary, and why wish for large language models

DanielFilan · Jun 14, 2024, 4:10 PM
9 points
1 comment · 2 min read · LW link
(danielfilan.com)

The Leopold Model: Analysis and Reactions

Zvi · Jun 14, 2024, 3:10 PM
109 points
19 comments · 57 min read · LW link
(thezvi.wordpress.com)

[Question] Thoughts on Francois Chollet’s belief that LLMs are far away from AGI?

O O · Jun 14, 2024, 6:32 AM
26 points
17 comments · 1 min read · LW link

Research Report: Alternative sparsity methods for sparse autoencoders with OthelloGPT.

Andrew Quaisley · Jun 14, 2024, 12:57 AM
17 points
5 comments · 12 min read · LW link