Upcoming Workshop on Post-AGI Economics, Culture, and Governance

28 Oct 2025 21:55 UTC
37 points
1 comment · 2 min read · LW link

AI Craziness Mitigation Efforts

Zvi · 28 Oct 2025 19:00 UTC
37 points
5 comments · 11 min read · LW link
(thezvi.wordpress.com)

When Will AI Transform the Economy?

Andre.Infante · 28 Oct 2025 18:55 UTC
60 points
2 comments · 8 min read · LW link

Introducing the Epoch Capabilities Index (ECI)

28 Oct 2025 18:23 UTC
65 points
9 comments · 1 min read · LW link
(epoch.ai)

Mottes and Baileys in AI discourse

Raemon · 28 Oct 2025 17:50 UTC
51 points
9 comments · 9 min read · LW link

Temporarily Losing My Ego

Logan Riggs · 28 Oct 2025 16:41 UTC
21 points
4 comments · 3 min read · LW link

The Memetics of AI Successionism

Jan_Kulveit · 28 Oct 2025 15:04 UTC
212 points
54 comments · 9 min read · LW link

New 80,000 Hours problem profile on the risks of power-seeking AI

Zershaaneh Qureshi · 28 Oct 2025 14:37 UTC
7 points
0 comments · 2 min read · LW link

LLM robots can’t pass butter (and they are having an existential crisis about it)

Lukas Petersson · 28 Oct 2025 14:14 UTC
105 points
7 comments · 4 min read · LW link

Call for mentors from AI Safety and academia. Sci.STEPS mentorship program

Valentin2026 · 28 Oct 2025 13:41 UTC
7 points
0 comments · 2 min read · LW link

Heuristics for assessing how much of a bubble AI is in/will be

Remmelt · 28 Oct 2025 8:08 UTC
8 points
2 comments · 2 min read · LW link
(www.wired.com)

Q2 AI Benchmark Results: Pros Maintain Clear Lead

28 Oct 2025 5:40 UTC
14 points
0 comments · 24 min read · LW link
(www.metaculus.com)

A Sketch of Helpfulness Theory With Equivocal Principals

Lorxus · 28 Oct 2025 4:11 UTC
7 points
1 comment · 6 min read · LW link
(tiled-with-pentagons.blogspot.com)

Rational Emotivism

Notelrac · 28 Oct 2025 3:17 UTC
1 point
0 comments · 6 min read · LW link

Paper: Take Goodhart Seriously: Principled Limit on General-Purpose AI Optimization

antmaier · 28 Oct 2025 2:55 UTC
13 points
0 comments · 1 min read · LW link
(arxiv.org)

What were mistakes of AI Safety field-building? How can we avoid them while we build the AI Welfare?

Güney Türker · 28 Oct 2025 2:50 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Resolving Newcomb’s Problem Perfect Predictor Case

Praphull Kabtiyal · 28 Oct 2025 2:45 UTC
7 points
1 comment · 19 min read · LW link

[CS 2881r] Can We Prompt Our Way to Safety? Comparing System Prompt Styles and Post-Training Effects on Safety Benchmarks

hughvd · 28 Oct 2025 2:38 UTC
4 points
0 comments · 8 min read · LW link

Flourish: Human–AI Unconference

Alessandro Pedori · 28 Oct 2025 2:26 UTC
3 points
0 comments · 3 min read · LW link

All the labs AI safety plans: 2025 edition

Algon · 28 Oct 2025 0:25 UTC
49 points
2 comments · 16 min read · LW link
(aisafety.info)

A Bayesian Explanation of Causal Models

Menotim · 27 Oct 2025 23:16 UTC
2 points
0 comments · 25 min read · LW link

Brainstorming Food on the Cheap + Healthy + Convenient + Edible Frontier

Morpheus · 27 Oct 2025 23:04 UTC
19 points
3 comments · 4 min read · LW link

Transactional method for non-transactional relationship: Relationship as a Common-pool Resource problem

David H. · 27 Oct 2025 22:29 UTC
2 points
0 comments · 7 min read · LW link

[Question] How Important is Inverting LLMs?

Maloew · 27 Oct 2025 20:59 UTC
8 points
1 comment · 1 min read · LW link

Asking (Some Of) The Right Questions

Zvi · 27 Oct 2025 19:00 UTC
31 points
3 comments · 14 min read · LW link
(thezvi.wordpress.com)

life lessons from trading

thiccythot · 27 Oct 2025 16:56 UTC
43 points
3 comments · 4 min read · LW link

Agentic Monitoring for AI Control

LAThomson · 27 Oct 2025 16:38 UTC
9 points
0 comments · 9 min read · LW link

Model Parameters as a Steganographic Private Channel

Lennart Finke · 27 Oct 2025 16:08 UTC
9 points
0 comments · 5 min read · LW link

Major survey on the HS/TS spectrum and gAyGP

tailcalled · 27 Oct 2025 14:31 UTC
22 points
3 comments · 8 min read · LW link

Death of the Author

J Bostock · 27 Oct 2025 12:35 UTC
5 points
0 comments · 3 min read · LW link

Exploring the multi-dimensional refusal subspace in reasoning models

Le magicien quantique · 27 Oct 2025 9:03 UTC
5 points
2 comments · 4 min read · LW link

AIs should also refuse to work on capabilities research

Davidmanheim · 27 Oct 2025 8:42 UTC
150 points
20 comments · 3 min read · LW link

Uncommon Utilitarianism #3: Bounded Utility Functions

Alice Blair · 27 Oct 2025 5:06 UTC
16 points
10 comments · 6 min read · LW link

List of lists of project ideas in AI Safety

Veronica Gordi · 27 Oct 2025 1:28 UTC
6 points
0 comments · 14 min read · LW link
(www.notion.so)

[Question] How valuable is money-in-market?

Hruss · 27 Oct 2025 0:47 UTC
6 points
1 comment · 1 min read · LW link

Credit goes to the presenter, not the inventor

Algon · 26 Oct 2025 23:55 UTC
42 points
5 comments · 3 min read · LW link

On Fleshling Safety: A Debate by Klurl and Trapaucius.

Eliezer Yudkowsky · 26 Oct 2025 23:44 UTC
253 points
52 comments · 79 min read · LW link

Results of “Experiment on Bernoulli processes”

joseph_c · 26 Oct 2025 21:47 UTC
9 points
2 comments · 4 min read · LW link

certain exotic neurotransmitters as SMART PILLS: or compounds that increase the capacity for mental work in humans

azergante · 26 Oct 2025 20:51 UTC
4 points
0 comments · 22 min read · LW link
(erowid.org)

Cancer has a surprising amount of detail

Abhishaike Mahajan · 26 Oct 2025 20:33 UTC
127 points
18 comments · 11 min read · LW link
(www.owlposting.com)
(www.owlposting.com)

Stability of natural latents in information theoretic terms

Aram Ebtekar · 26 Oct 2025 20:33 UTC
35 points
0 comments · 2 min read · LW link

Lessons from Teaching Rationality to EAs in the Netherlands

Shoshannah Tekofsky · 26 Oct 2025 20:03 UTC
20 points
0 comments · 7 min read · LW link
(forum.effectivealtruism.org)

Are We Their Chimps?

soycarts · 26 Oct 2025 16:04 UTC
−7 points
49 comments · 1 min read · LW link

FWIW: What I noticed at a (Goenka) Vipassana retreat

David Gross · 26 Oct 2025 15:10 UTC
38 points
4 comments · 9 min read · LW link

Brightline is Actually Pretty Dangerous

jefftk · 26 Oct 2025 12:51 UTC
53 points
12 comments · 3 min read · LW link
(www.jefftk.com)

Seven-ish Words from My Thought-Language

Lorxus · 26 Oct 2025 4:30 UTC
68 points
13 comments · 4 min read · LW link
(tiled-with-pentagons.blogspot.com)

Remembrancy

Algon · 25 Oct 2025 22:47 UTC
11 points
0 comments · 3 min read · LW link

Pygmalion’s Wafer

Charlie Sanders · 25 Oct 2025 20:17 UTC
8 points
2 comments · 4 min read · LW link
(www.dailymicrofiction.com)

Debating theism

Ivan · 25 Oct 2025 18:35 UTC
−21 points
0 comments · 25 min read · LW link

[Question] Why is OpenAI releasing products like Sora and Atlas?

J Thomas Moros · 25 Oct 2025 17:59 UTC
16 points
10 comments · 1 min read · LW link