Meta: Frontier AI Framework

Zach Stein-Perlman · Feb 3, 2025, 10:00 PM
33 points
2 comments · 1 min read · LW link
(ai.meta.com)

$300 Fermi Model Competition

ozziegooen · Feb 3, 2025, 7:47 PM
16 points
18 comments · LW link

Visualizing Interpretability

Darold Davis · Feb 3, 2025, 7:36 PM
2 points
0 comments · 4 min read · LW link

Alignment Can Reduce Performance on Simple Ethical Questions

Daan Henselmans · Feb 3, 2025, 7:35 PM
16 points
7 comments · 6 min read · LW link

The Overlap Paradigm: Rethinking Data’s Role in Weak-to-Strong Generalization (W2SG)

Serhii Zamrii · Feb 3, 2025, 7:31 PM
2 points
0 comments · 11 min read · LW link

Sleeper agents appear resilient to activation steering

Lucy Wingard · Feb 3, 2025, 7:31 PM
6 points
0 comments · 7 min read · LW link

Part 1: Enhancing Inner Alignment in CLIP Vision Transformers: Mitigating Reification Bias with SAEs and Grad ECLIP

Gilber A. Corrales · Feb 3, 2025, 7:30 PM
1 point
0 comments · 13 min read · LW link

Superintelligence Alignment Proposal

Davey Morse · Feb 3, 2025, 6:47 PM
5 points
3 comments · 9 min read · LW link

Gettier Cases [repost]

Antigone · Feb 3, 2025, 6:12 PM
−4 points
5 comments · 2 min read · LW link

The Self-Reference Trap in Mathematics

Alister Munday · Feb 3, 2025, 4:12 PM
−41 points
23 comments · 2 min read · LW link

Stopping unaligned LLMs is easy!

Yair Halberstadt · Feb 3, 2025, 3:38 PM
−3 points
11 comments · 2 min read · LW link

The Outer Levels

Jerdle · Feb 3, 2025, 2:30 PM
2 points
3 comments · 6 min read · LW link

o3-mini Early Days

Zvi · Feb 3, 2025, 2:20 PM
45 points
0 comments · 15 min read · LW link
(thezvi.wordpress.com)

OpenAI releases deep research agent

Seth Herd · Feb 3, 2025, 12:48 PM
78 points
21 comments · 3 min read · LW link
(openai.com)

Neuron Activations to CLIP Embeddings: Geometry of Linear Combinations in Latent Space

Roman Malov · Feb 3, 2025, 10:30 AM
4 points
0 comments · 2 min read · LW link

[Question] Can we infer the search space of a local optimiser?

Lucius Bushnaq · Feb 3, 2025, 10:17 AM
25 points
5 comments · 3 min read · LW link

Pick two: concise, comprehensive, or clear rules

Screwtape · Feb 3, 2025, 6:39 AM
78 points
27 comments · 8 min read · LW link

Language Models and World Models, a Philosophy

kyjohnso · Feb 3, 2025, 2:55 AM
1 point
0 comments · 1 min read · LW link
(hylaeansea.org)

Keeping Capital is the Challenge

LTM · Feb 3, 2025, 2:04 AM
13 points
2 comments · 17 min read · LW link
(routecause.substack.com)

Use computers as powerful as in 1985 or AI controls humans or ?

jrincayc · Feb 3, 2025, 12:51 AM
3 points
0 comments · 2 min read · LW link

Some Theses on Motivational and Directional Feedback

abstractapplic · Feb 2, 2025, 10:50 PM
9 points
3 comments · 4 min read · LW link

Humanity Has A Possible 99.98% Chance Of Extinction

st3rlxx · Feb 2, 2025, 9:46 PM
−12 points
1 comment · 5 min read · LW link

Exploring how OthelloGPT computes its world model

JMaar · Feb 2, 2025, 9:29 PM
7 points
0 comments · 8 min read · LW link

An Introduction to Evidential Decision Theory

Babić · Feb 2, 2025, 9:27 PM
5 points
2 comments · 10 min read · LW link

“DL training == human learning” is a bad analogy

kman · Feb 2, 2025, 8:59 PM
3 points
0 comments · 1 min read · LW link

Conditional Importance in Toy Models of Superposition

james__p · Feb 2, 2025, 8:35 PM
9 points
4 comments · 10 min read · LW link

Tracing Typos in LLMs: My Attempt at Understanding How Models Correct Misspellings

Ivan Dostal · Feb 2, 2025, 7:56 PM
3 points
1 comment · 5 min read · LW link

The Simplest Good

Jesse Hoogland · Feb 2, 2025, 7:51 PM
75 points
6 comments · 5 min read · LW link

Gradual Disempowerment, Shell Games and Flinches

Jan_Kulveit · Feb 2, 2025, 2:47 PM
129 points
36 comments · 6 min read · LW link

Thoughts on Toy Models of Superposition

james__p · Feb 2, 2025, 1:52 PM
5 points
2 comments · 9 min read · LW link

Escape from Alderaan I

lsusr · Feb 2, 2025, 10:48 AM
58 points
2 comments · 6 min read · LW link

ChatGPT: Exploring the Digital Wilderness, Findings and Prospects

Bill Benzon · Feb 2, 2025, 9:54 AM
2 points
0 comments · 5 min read · LW link

[Question] Would anyone be interested in pursuing the Virtue of Scholarship with me?

japancolorado · Feb 2, 2025, 4:02 AM
11 points
2 comments · 1 min read · LW link

Chinese room AI to survive the inescapable end of compute governance

rotatingpaguro · Feb 2, 2025, 2:42 AM
−4 points
0 comments · 11 min read · LW link

Seasonal Patterns in BIDA’s Attendance

jefftk · Feb 2, 2025, 2:40 AM
11 points
0 comments · 2 min read · LW link
(www.jefftk.com)

AI acceleration, DeepSeek, moral philosophy

Josh H · Feb 2, 2025, 12:08 AM
2 points
0 comments · 12 min read · LW link

Falsehoods you might believe about people who are at a rationalist meetup

Screwtape · Feb 1, 2025, 11:32 PM
60 points
12 comments · 4 min read · LW link

Interpreting autonomous driving agents with attention based architecture

Manav Dahra · Feb 1, 2025, 11:20 PM
1 point
0 comments · 11 min read · LW link

Rationalist Movie Reviews

Nicholas / Heather Kross · Feb 1, 2025, 11:10 PM
16 points
2 comments · 4 min read · LW link
(www.thinkingmuchbetter.com)

Retroactive If-Then Commitments

MichaelDickens · Feb 1, 2025, 10:22 PM
7 points
0 comments · 1 min read · LW link

Exploring the coherence of features explanations in the GemmaScope

Mattia Proietti · Feb 1, 2025, 9:28 PM
1 point
0 comments · 19 min read · LW link

Machine Unlearning in Large Language Models: A Comprehensive Survey with Empirical Insights from the Qwen 1.5 1.8B Model

Rudaiba · Feb 1, 2025, 9:26 PM
9 points
2 comments · 11 min read · LW link

Towards a Science of Evals for Sycophancy

andrejfsantos · Feb 1, 2025, 9:17 PM
7 points
0 comments · 8 min read · LW link

Post AGI effect prediction

Juliezhanggg · Feb 1, 2025, 9:16 PM
1 point
0 comments · 7 min read · LW link

Unlocking Ethical AI and Improving Jailbreak Defenses: Reinforcement Learning with Layered Morphology (RLLM)

MiguelDev · Feb 1, 2025, 7:17 PM
4 points
2 comments · 2 min read · LW link
(www.whitehatstoic.com)

Poetic Methods I: Meter as Communication Protocol

adamShimi · Feb 1, 2025, 6:22 PM
19 points
0 comments · 1 min read · LW link
(formethods.substack.com)

Blackpool Applied Rationality Unconference 2025

Feb 1, 2025, 2:09 PM
6 points
0 comments · 7 min read · LW link

[Question] How likely is an attempted coup in the United States in the next four years?

Alexander de Vries · Feb 1, 2025, 1:12 PM
4 points
2 comments · 1 min read · LW link

Blackpool Applied Rationality Unconference 2025

Feb 1, 2025, 1:04 PM
23 points
2 comments · 7 min read · LW link

One-dimensional vs multi-dimensional features in interpretability

charlieoneill · Feb 1, 2025, 9:10 AM
6 points
0 comments · 2 min read · LW link