Designing a Job Displacement Model

claywren · 14 Dec 2025 22:23 UTC
22 points
0 comments · 19 min read · LW link

A high integrity/epistemics political coalition?

Raemon · 14 Dec 2025 22:21 UTC
148 points
34 comments · 13 min read · LW link

Fanning Radiators

jefftk · 14 Dec 2025 21:10 UTC
14 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Abstraction as a generalization of algorithmic Markov condition

Daniel C · 14 Dec 2025 18:55 UTC
8 points
0 comments · 7 min read · LW link

No, Americans Don’t Think Foreign Aid Is 26% of the Budget

Julius · 14 Dec 2025 18:47 UTC
67 points
17 comments · 5 min read · LW link
(thegreymatter.substack.com)

A Life That Cannot Be A Failure

Bentham's Bulldog · 14 Dec 2025 16:40 UTC
−7 points
0 comments · 5 min read · LW link

Should LLMs accept invites to Epstein’s island?

Lukas Petersson · 14 Dec 2025 15:21 UTC
5 points
0 comments · 1 min read · LW link
(lukaspetersson.com)

The Axiom of Choice is Not Controversial

GenericModel · 14 Dec 2025 4:08 UTC
42 points
29 comments · 7 min read · LW link
(enrichedjamsham.substack.com)

Open Source Replication of the Auditing Game Model Organism

abhayesian · 14 Dec 2025 2:10 UTC
20 points
0 comments · 1 min read · LW link
(alignment.anthropic.com)

Why did I believe Oliver Sacks?

Eye You · 13 Dec 2025 23:39 UTC
69 points
17 comments · 1 min read · LW link

In Favor of Inkhaven-But-Less

Alice Blair · 13 Dec 2025 23:16 UTC
26 points
6 comments · 2 min read · LW link

Micro-visions for AI-powered online content

Alexandre Variengien · 13 Dec 2025 23:05 UTC
11 points
0 comments · 8 min read · LW link
(alexandrevariengien.com)

When is it Worth Working?

foodforthought · 13 Dec 2025 21:40 UTC
23 points
1 comment · 6 min read · LW link

[Question] What does “lattice of abstraction” mean?

Adam Zerner · 13 Dec 2025 21:19 UTC
11 points
8 comments · 1 min read · LW link

Filler tokens don’t allow sequential reasoning

Brendan Long · 13 Dec 2025 20:22 UTC
74 points
5 comments · 1 min read · LW link

Hogwarts’ 2025 Winter Secular Solstice Celebration

Espedair Street · 13 Dec 2025 19:55 UTC
3 points
0 comments · 1 min read · LW link

Toss a bitcoin to your Lightcone – LW + Lighthaven’s 2026 fundraiser

habryka · 13 Dec 2025 19:32 UTC
310 points
129 comments · 52 min read · LW link

“Smarter brains run on sparsely connected neurons”

Pjain · 13 Dec 2025 18:27 UTC
12 points
0 comments · 1 min read · LW link
(www.bio.psy.ruhr-uni-bochum.de)

Whack-a-mole: generalisation resistance could be facilitated by training-distribution imprintation

lennie · 13 Dec 2025 17:46 UTC
23 points
0 comments · 14 min read · LW link

Zagreb meetup: winter 2025

dominicq · 13 Dec 2025 17:10 UTC
5 points
0 comments · 1 min read · LW link

Why the concept of AI alignment as it is currently formulated is morally troubling

Horosphere · 13 Dec 2025 14:37 UTC
5 points
3 comments · 5 min read · LW link

You Can Just Buy Far-UVC

jefftk · 13 Dec 2025 13:10 UTC
123 points
26 comments · 1 min read · LW link
(www.jefftk.com)

How I stopped being sure LLMs are just making up their internal experience (but the topic is still confusing)

Kaj_Sotala · 13 Dec 2025 12:38 UTC
199 points
67 comments · 29 min read · LW link

The Inevitable Evolution of AI Agents

Steven McCulloch · 13 Dec 2025 11:51 UTC
32 points
1 comment · 9 min read · LW link

Permanently Padding a Suitcase

jefftk · 13 Dec 2025 3:10 UTC
11 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Is the Constructible Universe All There Is?

GenericModel · 13 Dec 2025 2:29 UTC
18 points
2 comments · 8 min read · LW link
(enrichedjamsham.substack.com)

Wages under superintelligence

Zachary Brown · 13 Dec 2025 0:06 UTC
33 points
1 comment · 1 min read · LW link
(beforeporcelain.substack.com)

Trust is Neither Scalar Nor a Snapshot

phoenix · 13 Dec 2025 0:05 UTC
6 points
3 comments · 3 min read · LW link

Superweight Damage Repair in OLMo-1B utilizing a Single Row Patch (CPU-only Experiment)

sunmoonron · 13 Dec 2025 0:03 UTC
12 points
2 comments · 2 min read · LW link

What’s guaranteed in Life?

shanzson · 12 Dec 2025 21:27 UTC
−6 points
3 comments · 1 min read · LW link

Evaluating LLM hypothesis generation in biology is hard.

Austin Morrissey · 12 Dec 2025 20:48 UTC
1 point
0 comments · 6 min read · LW link

Book Review: The Age of Fighting Sail

Suspended Reason · 12 Dec 2025 20:15 UTC
63 points
3 comments · 12 min read · LW link

U.S. Democracy Threat Index: $10,000 in Forecasting Prizes

ChristianWilliams · 12 Dec 2025 19:43 UTC
4 points
0 comments · 1 min read · LW link

Conditional On Long-Range Signal, Ising Still Factors Locally

12 Dec 2025 19:31 UTC
31 points
2 comments · 6 min read · LW link

Leading models take chilling tradeoffs in realistic scenarios, new research finds

Mordechai Rorvig · 12 Dec 2025 18:27 UTC
−2 points
4 comments · 1 min read · LW link
(www.foommagazine.org)

Wittgenstein was wrong

Ben (Berlin) · 12 Dec 2025 16:33 UTC
3 points
1 comment · 2 min read · LW link

Monthly Roundup #37: December 2025

Zvi · 12 Dec 2025 14:40 UTC
28 points
4 comments · 22 min read · LW link
(thezvi.wordpress.com)

Information in circulation is self-organised critical. Small changes in environment can make large, discontinuous changes in the information space.

puffymist · 12 Dec 2025 14:16 UTC
8 points
2 comments · 1 min read · LW link

AI self-replication roundup

tbs · 12 Dec 2025 14:15 UTC
4 points
0 comments · 18 min read · LW link
(meditationsondigitalminds.substack.com)

The Fly Farm

J Bostock · 12 Dec 2025 13:24 UTC
31 points
0 comments · 7 min read · LW link

New 80k problem profile: extreme power concentration

rosehadshar · 12 Dec 2025 13:05 UTC
48 points
12 comments · 4 min read · LW link

The point of view of the universe

Alexandre Variengien · 12 Dec 2025 12:00 UTC
14 points
0 comments · 2 min read · LW link
(alexandrevariengien.com)

The Fantastic Piece of Tinfoil in my Wallet

jefftk · 12 Dec 2025 3:30 UTC
59 points
3 comments · 1 min read · LW link
(www.jefftk.com)

AISN #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back

Nick_Stockton · 12 Dec 2025 3:10 UTC
4 points
0 comments · 5 min read · LW link
(aisafety.substack.com)

Annals of Counterfactual Han

GenericModel · 12 Dec 2025 1:11 UTC
48 points
0 comments · 6 min read · LW link
(enrichedjamsham.substack.com)

Does dissolving Newcomb’s paradox matter?

Srdjan Miletic · 12 Dec 2025 1:06 UTC
15 points
6 comments · 2 min read · LW link
(www.dissent.blog)

ARC-AGI-2 human baseline surpassed (updated)

Tim H · 12 Dec 2025 0:10 UTC
21 points
2 comments · 2 min read · LW link

Designing the World’s Safest AI based on Morality Models

shanzson · 11 Dec 2025 23:33 UTC
−12 points
0 comments · 22 min read · LW link

ASI Already Knows About Torture—In Defense of Talking Openly About S-Risks

KatWoods · 11 Dec 2025 21:15 UTC
−9 points
0 comments · 2 min read · LW link

Cognitive Tech from Algorithmic Information Theory

Cole Wyeth · 11 Dec 2025 20:32 UTC
41 points
9 comments · 1 min read · LW link