A Solution for AGI/ASI Safety

Weibing Wang · 18 Dec 2024 19:44 UTC
50 points
29 comments · 1 min read · LW link

Takes on “Alignment Faking in Large Language Models”

Joe Carlsmith · 18 Dec 2024 18:22 UTC
105 points
7 comments · 62 min read · LW link

A Matter of Taste

Zvi · 18 Dec 2024 17:50 UTC
36 points
5 comments · 11 min read · LW link
(thezvi.wordpress.com)

Are we a different person each time? A simple argument for the impermanence of our identity

l4mp · 18 Dec 2024 17:21 UTC
−4 points
5 comments · 1 min read · LW link

Alignment Faking in Large Language Models

18 Dec 2024 17:19 UTC
496 points
85 comments · 10 min read · LW link · 3 reviews

Can o1-preview find major mistakes amongst 59 NeurIPS ’24 MLSB papers?

Abhishaike Mahajan · 18 Dec 2024 14:21 UTC
19 points
0 comments · 6 min read · LW link
(www.owlposting.com)

Walking Sue

Matthew McRedmond · 18 Dec 2024 13:19 UTC
2 points
5 comments · 8 min read · LW link

What conclusions can be drawn from a single observation about wealth in tennis?

Trevor Cappallo · 18 Dec 2024 9:55 UTC
8 points
3 comments · 2 min read · LW link

Don’t Associate AI Safety With Activism

Eneasz · 18 Dec 2024 8:01 UTC
17 points
15 comments · 1 min read · LW link
(deathisbad.substack.com)

[Question] How should I optimize my decision making model for ‘ideas’?

CstineSublime · 18 Dec 2024 4:09 UTC
3 points
0 comments · 4 min read · LW link

Preppers Are Too Negative on Objects

jefftk · 18 Dec 2024 2:30 UTC
45 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Review: Breaking Free with Dr. Stone

TurnTrout · 18 Dec 2024 1:26 UTC
47 points
5 comments · 1 min read · LW link
(turntrout.com)

Ablations for “Frontier Models are Capable of In-context Scheming”

17 Dec 2024 23:58 UTC
116 points
1 comment · 2 min read · LW link

Careless thinking: A theory of bad thinking

Nathan Young · 17 Dec 2024 18:23 UTC
49 points
17 comments · 9 min read · LW link
(nathanpmyoung.substack.com)

The Second Gemini

Zvi · 17 Dec 2024 15:50 UTC
23 points
0 comments · 11 min read · LW link
(thezvi.wordpress.com)

AIS Hungary is hiring a part-time Technical Lead! (Deadline: Dec 31st)

gergogaspar · 17 Dec 2024 14:12 UTC
1 point
0 comments · 2 min read · LW link

Everything you care about is in the map

Tahp · 17 Dec 2024 14:05 UTC
17 points
27 comments · 3 min read · LW link

Reality is Fractal-Shaped

silentbob · 17 Dec 2024 13:52 UTC
18 points
1 comment · 8 min read · LW link

Trying to translate when people talk past each other

Kaj_Sotala · 17 Dec 2024 9:40 UTC
41 points
12 comments · 6 min read · LW link
(kajsotala.fi)

What is “wireheading”?

17 Dec 2024 7:49 UTC
10 points
0 comments · 1 min read · LW link
(aisafety.info)

Where do you put your ideas?

CstineSublime · 17 Dec 2024 7:26 UTC
9 points
20 comments · 1 min read · LW link

Elevating Air Purifiers

jefftk · 17 Dec 2024 1:40 UTC
25 points
0 comments · 1 min read · LW link
(www.jefftk.com)

A dataset of questions on decision-theoretic reasoning in Newcomb-like problems

16 Dec 2024 22:42 UTC
50 points
1 comment · 2 min read · LW link
(arxiv.org)

A practical guide to tiling the universe with hedonium

Vittu Perkele · 16 Dec 2024 21:25 UTC
−8 points
1 comment · 1 min read · LW link
(perkeleperusing.substack.com)

AI Safety Seed Funding Network—Join as a Donor or Investor

Alexandra Bos · 16 Dec 2024 19:30 UTC
30 points
0 comments · 2 min read · LW link

I read every major AI lab’s safety plan so you don’t have to

sarahhw · 16 Dec 2024 18:51 UTC
20 points
0 comments · 12 min read · LW link
(longerramblings.substack.com)

Grokking revisited: reverse engineering grokking modulo addition in LSTM

16 Dec 2024 18:48 UTC
4 points
0 comments · 6 min read · LW link

Progress links and short notes, 2024-12-16

jasoncrawford · 16 Dec 2024 17:24 UTC
7 points
0 comments · 2 min read · LW link
(newsletter.rootsofprogress.org)

Effective Altruism FAQ

Bentham's Bulldog · 16 Dec 2024 16:27 UTC
0 points
7 comments · 12 min read · LW link

Variably compressibly studies are fun

dkl9 · 16 Dec 2024 16:00 UTC
0 points
0 comments · 2 min read · LW link
(dkl9.net)

AIs Will Increasingly Attempt Shenanigans

Zvi · 16 Dec 2024 15:20 UTC
119 points
2 comments · 26 min read · LW link
(thezvi.wordpress.com)

Testing which LLM architectures can do hidden serial reasoning

Filip Sondej · 16 Dec 2024 13:48 UTC
84 points
9 comments · 4 min read · LW link

NeuroAI for AI safety: A Differential Path

16 Dec 2024 13:17 UTC
22 points
0 comments · 7 min read · LW link
(arxiv.org)

Circling as practice for “just be yourself”

Kaj_Sotala · 16 Dec 2024 7:40 UTC
87 points
6 comments · 4 min read · LW link
(kajsotala.fi)

Reanalyzing the 2023 Expert Survey on Progress in AI

AI Impacts · 16 Dec 2024 6:10 UTC
8 points
0 comments · 1 min read · LW link
(blog.aiimpacts.org)

Ideas for benchmarking LLM creativity

gwern · 16 Dec 2024 5:18 UTC
60 points
11 comments · 1 min read · LW link
(gwern.net)

Comparing the AirFanta 3Pro to the Coway AP-1512

jefftk · 16 Dec 2024 1:40 UTC
13 points
0 comments · 1 min read · LW link
(www.jefftk.com)

[Question] are IQ tests a good measure of intelligence?

KvmanThinking · 15 Dec 2024 23:06 UTC
0 points
5 comments · 1 min read · LW link

Madison Secular Solstice

svfritz · 15 Dec 2024 21:52 UTC
1 point
0 comments · 1 min read · LW link

[Question] Is AI alignment a purely functional property?

Roko · 15 Dec 2024 21:42 UTC
13 points
8 comments · 1 min read · LW link

[Question] How counterfactual are logical counterfactuals?

Donald Hobson · 15 Dec 2024 21:16 UTC
11 points
10 comments · 1 min read · LW link

Debunking the myth of safe AI

henophilia · 15 Dec 2024 17:44 UTC
−11 points
8 comments · 1 min read · LW link
(henophilia.substack.com)

Introducing Avatarism: A Rational Framework for Building actual Heaven

ratiba ro · 15 Dec 2024 17:17 UTC
2 points
2 comments · 2 min read · LW link

A Public Choice Take on Effective Altruism

vaishnav92 · 15 Dec 2024 16:58 UTC
9 points
4 comments · 3 min read · LW link
(www.optimaloutliers.com)

World Models I’m Currently Building

temporary · 15 Dec 2024 16:29 UTC
5 points
1 comment · 1 min read · LW link
(samuelshadrach.com)

Dress Up For Secular Solstice

Gordon H.S. · 15 Dec 2024 16:28 UTC
33 points
13 comments · 7 min read · LW link

Remap your caps lock key

bilalchughtai · 15 Dec 2024 14:03 UTC
82 points
21 comments · 1 min read · LW link

Effective Evil’s AI Misalignment Plan

lsusr · 15 Dec 2024 7:39 UTC
83 points
9 comments · 3 min read · LW link

How to Edit an Essay into a Solstice Speech?

Czynski · 15 Dec 2024 4:30 UTC
5 points
1 comment · 1 min read · LW link
(thepdv.wordpress.com)

How Your Physiology Affects the Mind’s Projection Fallacy

YanLyutnev · 14 Dec 2024 21:10 UTC
−1 points
0 comments · 6 min read · LW link