Come join Dovetail’s agent foundations fellowship talks & discussion

Alex_Altair · 15 Feb 2025 22:10 UTC
24 points
0 comments · 1 min read · LW link

Quantifying the Qualitative: Towards a Bayesian Approach to Personal Insight

Pruthvi Kumar · 15 Feb 2025 19:50 UTC
1 point
0 comments · 6 min read · LW link

Knitting a Sweater in a Burning House

CrimsonChin · 15 Feb 2025 19:50 UTC
27 points
2 comments · 2 min read · LW link

Microplastics: Much Less Than You Wanted To Know

15 Feb 2025 19:08 UTC
82 points
8 comments · 13 min read · LW link

Preference for uncertainty and impact overestimation bias in altruistic systems.

Luck · 15 Feb 2025 12:27 UTC
1 point
0 comments · 1 min read · LW link

Artificial Static Place Intelligence: Guaranteed Alignment

ank · 15 Feb 2025 11:08 UTC
2 points
2 comments · 2 min read · LW link

The current AI strategic landscape: one bear’s perspective

Matrice Jacobine · 15 Feb 2025 9:49 UTC
11 points
0 comments · 2 min read · LW link
(philosophybear.substack.com)

6 (Potential) Misconceptions about AI Intellectuals

ozziegooen · 14 Feb 2025 23:51 UTC
18 points
11 comments · 12 min read · LW link

[Question] Should Open Philanthropy Make an Offer to Buy OpenAI?

peterr · 14 Feb 2025 23:18 UTC
25 points
1 comment · 1 min read · LW link

A computational no-coincidence principle

Eric Neyman · 14 Feb 2025 21:39 UTC
148 points
39 comments · 6 min read · LW link
(www.alignment.org)

Hopeful hypothesis, the Persona Jukebox.

Donald Hobson · 14 Feb 2025 19:24 UTC
11 points
4 comments · 3 min read · LW link

Introduction to Expected Value Fanaticism

Petra Kosonen · 14 Feb 2025 19:05 UTC
9 points
8 comments · 1 min read · LW link
(utilitarianism.net)

Intrinsic Dimension of Prompts in LLMs

Karthik Viswanathan · 14 Feb 2025 19:02 UTC
3 points
0 comments · 4 min read · LW link

Objective Realism: A Perspective Beyond Human Constructs

Apatheos · 14 Feb 2025 19:02 UTC
−12 points
1 comment · 2 min read · LW link

A short course on AGI safety from the GDM Alignment team

14 Feb 2025 15:43 UTC
104 points
2 comments · 1 min read · LW link
(deepmindsafetyresearch.medium.com)

The Mask Comes Off: A Trio of Tales

Zvi · 14 Feb 2025 15:30 UTC
81 points
1 comment · 13 min read · LW link
(thezvi.wordpress.com)

Celtic Knots on a hex lattice

Ben · 14 Feb 2025 14:29 UTC
27 points
10 comments · 2 min read · LW link

Bimodal AI Beliefs

Adam Train · 14 Feb 2025 6:45 UTC
6 points
1 comment · 4 min read · LW link

What is a circuit? [in interpretability]

Yudhister Kumar · 14 Feb 2025 4:40 UTC
23 points
1 comment · 1 min read · LW link

Systematic Sandbagging Evaluations on Claude 3.5 Sonnet

farrelmahaztra · 14 Feb 2025 1:22 UTC
13 points
0 comments · 1 min read · LW link
(farrelmahaztra.com)

Paranoia, Cognitive Biases, and Catastrophic Thought Patterns.

Spiritus Dei · 14 Feb 2025 0:13 UTC
−4 points
1 comment · 6 min read · LW link

Notes on the Presidential Election of 1836

Arjun Panickssery · 13 Feb 2025 23:40 UTC
23 points
0 comments · 7 min read · LW link
(arjunpanickssery.substack.com)

Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia

ank · 13 Feb 2025 22:35 UTC
1 point
2 comments · 11 min read · LW link

I’m making a ttrpg about life in an intentional community during the last year before the Singularity

bgaesop · 13 Feb 2025 21:54 UTC
11 points
2 comments · 2 min read · LW link

SWE Automation Is Coming: Consider Selling Your Crypto

A_donor · 13 Feb 2025 20:17 UTC
12 points
8 comments · 1 min read · LW link

≤10-year Timelines Remain Unlikely Despite DeepSeek and o3

Rafael Harth · 13 Feb 2025 19:21 UTC
52 points
67 comments · 15 min read · LW link

System 2 Alignment

Seth Herd · 13 Feb 2025 19:17 UTC
35 points
0 comments · 22 min read · LW link

Murder plots are infohazards

Chris Monteiro · 13 Feb 2025 19:15 UTC
311 points
44 comments · 2 min read · LW link

Sparse Autoencoder Feature Ablation for Unlearning

aludert · 13 Feb 2025 19:13 UTC
3 points
0 comments · 11 min read · LW link

What is it to solve the alignment problem?

Joe Carlsmith · 13 Feb 2025 18:42 UTC
31 points
6 comments · 19 min read · LW link
(joecarlsmith.substack.com)

Self-dialogue: Do behaviorist rewards make scheming AGIs?

Steven Byrnes · 13 Feb 2025 18:39 UTC
43 points
1 comment · 46 min read · LW link

How do we solve the alignment problem?

Joe Carlsmith · 13 Feb 2025 18:27 UTC
63 points
9 comments · 7 min read · LW link
(joecarlsmith.substack.com)

Ambiguous out-of-distribution generalization on an algorithmic task

13 Feb 2025 18:24 UTC
83 points
6 comments · 11 min read · LW link

Teaching AI to reason: this year’s most important story

Benjamin_Todd · 13 Feb 2025 17:40 UTC
10 points
0 comments · 10 min read · LW link
(benjamintodd.substack.com)

AI #103: Show Me the Money

Zvi · 13 Feb 2025 15:20 UTC
30 points
9 comments · 58 min read · LW link
(thezvi.wordpress.com)

OpenAI’s NSFW policy: user safety, harm reduction, and AI consent

8e9 · 13 Feb 2025 13:59 UTC
4 points
3 comments · 2 min read · LW link

Studies of Human Error Rate

tin482 · 13 Feb 2025 13:43 UTC
15 points
3 comments · 1 min read · LW link

the dumbest theory of everything

lostinwilliamsburg · 13 Feb 2025 7:57 UTC
−1 points
0 comments · 7 min read · LW link

Skepticism towards claims about the views of powerful institutions

tlevin · 13 Feb 2025 7:40 UTC
46 points
2 comments · 4 min read · LW link

Virtue signaling, and the “humans-are-wonderful” bias, as a trust exercise

lc · 13 Feb 2025 6:59 UTC
44 points
16 comments · 4 min read · LW link

My model of what is going on with LLMs

Cole Wyeth · 13 Feb 2025 3:43 UTC
110 points
49 comments · 7 min read · LW link

Not all capabilities will be created equal: focus on strategically superhuman agents

benwr · 13 Feb 2025 1:24 UTC
62 points
9 comments · 3 min read · LW link

LLMs can teach themselves to better predict the future

Ben Turtel · 13 Feb 2025 1:01 UTC
0 points
1 comment · 1 min read · LW link
(arxiv.org)

Dovetail’s agent foundations fellowship talks & discussion

Alex_Altair · 13 Feb 2025 0:49 UTC
10 points
0 comments · 1 min read · LW link

Extended analogy between humans, corporations, and AIs.

Daniel Kokotajlo · 13 Feb 2025 0:03 UTC
36 points
2 comments · 6 min read · LW link

Moral Hazard in Democratic Voting

lsusr · 12 Feb 2025 23:17 UTC
20 points
8 comments · 1 min read · LW link

MATS Spring 2024 Extension Retrospective

12 Feb 2025 22:43 UTC
26 points
1 comment · 15 min read · LW link

Hunting for AI Hackers: LLM Agent Honeypot

12 Feb 2025 20:29 UTC
35 points
0 comments · 5 min read · LW link
(www.apartresearch.com)

Probability of AI-Caused Disaster

Alvin Ånestrand · 12 Feb 2025 19:40 UTC
2 points
2 comments · 10 min read · LW link
(forecastingaifutures.substack.com)

Two flaws in the Machiavelli Benchmark

TheManxLoiner · 12 Feb 2025 19:34 UTC
24 points
0 comments · 3 min read · LW link