Nick Land: Orthogonality

lumpenspace · Feb 4, 2025, 9:07 PM
12 points
37 comments · 8 min read · LW link

What working on AI safety taught me about B2B SaaS sales

purple fire · Feb 4, 2025, 8:50 PM
7 points
12 comments · 5 min read · LW link

Subjective Naturalism in Decision Theory: Savage vs. Jeffrey–Bolker

Feb 4, 2025, 8:34 PM
45 points
22 comments · 5 min read · LW link

Anti-Slop Interventions?

abramdemski · Feb 4, 2025, 7:50 PM
76 points
33 comments · 6 min read · LW link

Can Persuasion Break AI Safety? Exploring the Interplay Between Fine-Tuning, Attacks, and Guardrails

Devina Jain · Feb 4, 2025, 7:10 PM
3 points
0 comments · 10 min read · LW link

[Question] Journalism student looking for sources

pinkerton · Feb 4, 2025, 6:58 PM
11 points
3 comments · 1 min read · LW link

We’re in Deep Research

Zvi · Feb 4, 2025, 5:20 PM
45 points
2 comments · 20 min read · LW link
(thezvi.wordpress.com)

The Capitalist Agent

henophilia · Feb 4, 2025, 3:32 PM
1 point
10 comments · 3 min read · LW link
(blog.hermesloom.org)

Forecasting AGI: Insights from Prediction Markets and Metaculus

Alvin Ånestrand · Feb 4, 2025, 1:03 PM
13 points
0 comments · 4 min read · LW link
(forecastingaifutures.substack.com)

Ruling Out Lookup Tables

Alfred Harwood · Feb 4, 2025, 10:39 AM
22 points
11 comments · 7 min read · LW link

Half-baked idea: a straightforward method for learning environmental goals?

Q Home · Feb 4, 2025, 6:56 AM
16 points
7 comments · 5 min read · LW link

Information Versus Action

Screwtape · Feb 4, 2025, 5:13 AM
27 points
0 comments · 6 min read · LW link

Utilitarian AI Alignment: Building a Moral Assistant with the Constitutional AI Method

Clément L · Feb 4, 2025, 4:15 AM
6 points
1 comment · 13 min read · LW link

Tear Down the Burren

jefftk · Feb 4, 2025, 3:40 AM
45 points
2 comments · 2 min read · LW link
(www.jefftk.com)

Constitutional Classifiers: Defending against universal jailbreaks (Anthropic Blog)

Archimedes · Feb 4, 2025, 2:55 AM
16 points
1 comment · 1 min read · LW link
(www.anthropic.com)

Can someone, anyone, make superintelligence a more concrete concept?

Ori Nagel · Feb 4, 2025, 2:18 AM
2 points
8 comments · 5 min read · LW link

What are the “no free lunch” theorems?

Feb 4, 2025, 2:02 AM
19 points
4 comments · 1 min read · LW link
(aisafety.info)

eliminating bias through language?

KvmanThinking · Feb 4, 2025, 1:52 AM
1 point
12 comments · 1 min read · LW link

New Foresight Longevity Bio & Molecular Nano Grants Program

Allison Duettmann · Feb 4, 2025, 12:28 AM
11 points
0 comments · 1 min read · LW link

Meta: Frontier AI Framework

Zach Stein-Perlman · Feb 3, 2025, 10:00 PM
33 points
2 comments · 1 min read · LW link
(ai.meta.com)

$300 Fermi Model Competition

ozziegooen · Feb 3, 2025, 7:47 PM
16 points
18 comments · LW link

Visualizing Interpretability

Darold Davis · Feb 3, 2025, 7:36 PM
2 points
0 comments · 4 min read · LW link

Alignment Can Reduce Performance on Simple Ethical Questions

Daan Henselmans · Feb 3, 2025, 7:35 PM
16 points
7 comments · 6 min read · LW link

The Overlap Paradigm: Rethinking Data’s Role in Weak-to-Strong Generalization (W2SG)

Serhii Zamrii · Feb 3, 2025, 7:31 PM
2 points
0 comments · 11 min read · LW link

Sleeper agents appear resilient to activation steering

Lucy Wingard · Feb 3, 2025, 7:31 PM
6 points
0 comments · 7 min read · LW link

Part 1: Enhancing Inner Alignment in CLIP Vision Transformers: Mitigating Reification Bias with SAEs and Grad ECLIP

Gilber A. Corrales · Feb 3, 2025, 7:30 PM
1 point
0 comments · 13 min read · LW link

Superintelligence Alignment Proposal

Davey Morse · Feb 3, 2025, 6:47 PM
5 points
3 comments · 9 min read · LW link

Gettier Cases [repost]

Antigone · Feb 3, 2025, 6:12 PM
−4 points
5 comments · 2 min read · LW link

The Self-Reference Trap in Mathematics

Alister Munday · Feb 3, 2025, 4:12 PM
−41 points
23 comments · 2 min read · LW link

Stopping unaligned LLMs is easy!

Yair Halberstadt · Feb 3, 2025, 3:38 PM
−3 points
11 comments · 2 min read · LW link

The Outer Levels

Jerdle · Feb 3, 2025, 2:30 PM
2 points
3 comments · 6 min read · LW link

o3-mini Early Days

Zvi · Feb 3, 2025, 2:20 PM
45 points
0 comments · 15 min read · LW link
(thezvi.wordpress.com)

OpenAI releases deep research agent

Seth Herd · Feb 3, 2025, 12:48 PM
78 points
21 comments · 3 min read · LW link
(openai.com)

Neuron Activations to CLIP Embeddings: Geometry of Linear Combinations in Latent Space

Roman Malov · Feb 3, 2025, 10:30 AM
4 points
0 comments · 2 min read · LW link

[Question] Can we infer the search space of a local optimiser?

Lucius Bushnaq · Feb 3, 2025, 10:17 AM
25 points
5 comments · 3 min read · LW link

Pick two: concise, comprehensive, or clear rules

Screwtape · Feb 3, 2025, 6:39 AM
78 points
27 comments · 8 min read · LW link

Language Models and World Models, a Philosophy

kyjohnso · Feb 3, 2025, 2:55 AM
1 point
0 comments · 1 min read · LW link
(hylaeansea.org)

Keeping Capital is the Challenge

LTM · Feb 3, 2025, 2:04 AM
13 points
2 comments · 17 min read · LW link
(routecause.substack.com)

Use computers as powerful as in 1985 or AI controls humans or ?

jrincayc · Feb 3, 2025, 12:51 AM
3 points
0 comments · 2 min read · LW link

Some Theses on Motivational and Directional Feedback

abstractapplic · Feb 2, 2025, 10:50 PM
9 points
3 comments · 4 min read · LW link

Humanity Has A Possible 99.98% Chance Of Extinction

st3rlxx · Feb 2, 2025, 9:46 PM
−12 points
1 comment · 5 min read · LW link

Exploring how OthelloGPT computes its world model

JMaar · Feb 2, 2025, 9:29 PM
7 points
0 comments · 8 min read · LW link

An Introduction to Evidential Decision Theory

Babić · Feb 2, 2025, 9:27 PM
5 points
2 comments · 10 min read · LW link

“DL training == human learning” is a bad analogy

kman · Feb 2, 2025, 8:59 PM
3 points
0 comments · 1 min read · LW link

Conditional Importance in Toy Models of Superposition

james__p · Feb 2, 2025, 8:35 PM
9 points
4 comments · 10 min read · LW link

Tracing Typos in LLMs: My Attempt at Understanding How Models Correct Misspellings

Ivan Dostal · Feb 2, 2025, 7:56 PM
3 points
1 comment · 5 min read · LW link

The Simplest Good

Jesse Hoogland · Feb 2, 2025, 7:51 PM
75 points
6 comments · 5 min read · LW link

Gradual Disempowerment, Shell Games and Flinches

Jan_Kulveit · Feb 2, 2025, 2:47 PM
129 points
36 comments · 6 min read · LW link

Thoughts on Toy Models of Superposition

james__p · Feb 2, 2025, 1:52 PM
5 points
2 comments · 9 min read · LW link

Escape from Alderaan I

lsusr · Feb 2, 2025, 10:48 AM
58 points
2 comments · 6 min read · LW link