AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space

Bogdan Ionut Cirstea · Sep 14, 2024, 11:23 PM
17 points
1 comment · 1 min read · LW link
(arxiv.org)

How you can help pass important AI legislation with 10 minutes of effort

ThomasW · Sep 14, 2024, 10:10 PM
59 points
2 comments · 2 min read · LW link

[Question] Calibration training for ‘percentile rankings’?

david reinstein · Sep 14, 2024, 9:51 PM
3 points
0 comments · 2 min read · LW link

OpenAI o1, Llama 4, and AlphaZero of LLMs

Vladimir_Nesov · Sep 14, 2024, 9:27 PM
83 points
25 comments · 1 min read · LW link

Forever Leaders

Justice Howard · Sep 14, 2024, 8:55 PM
6 points
9 comments · 1 min read · LW link

Emergent Authorship: Creativity à la Communing

gswonk · Sep 14, 2024, 7:02 PM
1 point
0 comments · 3 min read · LW link

Compression Moves for Prediction

adamShimi · Sep 14, 2024, 5:51 PM
20 points
0 comments · 7 min read · LW link
(epistemologicalfascinations.substack.com)

Pay-on-results personal growth: first success

Chipmonk · Sep 14, 2024, 3:39 AM
63 points
8 comments · 4 min read · LW link
(chrislakin.blog)

Avoiding the Bog of Moral Hazard for AI

Nathan Helm-Burger · Sep 13, 2024, 9:24 PM
19 points
13 comments · 2 min read · LW link

[Question] If I ask an LLM to think step by step, how big are the steps?

ryan_b · Sep 13, 2024, 8:30 PM
7 points
1 comment · 1 min read · LW link

Estimating Tail Risk in Neural Networks

Mark Xu · Sep 13, 2024, 8:00 PM
68 points
9 comments · 23 min read · LW link
(www.alignment.org)

If-Then Commitments for AI Risk Reduction [by Holden Karnofsky]

habryka · Sep 13, 2024, 7:38 PM
28 points
0 comments · 20 min read · LW link
(carnegieendowment.org)

Can startups be impactful in AI safety?

Sep 13, 2024, 7:00 PM
15 points
0 comments · 6 min read · LW link

I just can’t agree with AI safety. Why am I wrong?

Ya Polkovnik · Sep 13, 2024, 5:48 PM
0 points
5 comments · 2 min read · LW link

Keeping it (less than) real: Against ℶ₂ possible people or worlds

quiet_NaN · Sep 13, 2024, 5:29 PM
17 points
3 comments · 9 min read · LW link

Why I’m bearish on mechanistic interpretability: the shards are not in the network

tailcalled · Sep 13, 2024, 5:09 PM
22 points
40 comments · 1 min read · LW link

Increasing the Span of the Set of Ideas

Jeffrey Heninger · Sep 13, 2024, 3:52 PM
6 points
1 comment · 9 min read · LW link

How difficult is AI Alignment?

Sammy Martin · Sep 13, 2024, 3:47 PM
44 points
6 comments · 23 min read · LW link

The Great Data Integration Schlep

sarahconstantin · Sep 13, 2024, 3:40 PM
275 points
19 comments · 9 min read · LW link
(sarahconstantin.substack.com)

“Real AGI”

Seth Herd · Sep 13, 2024, 2:13 PM
20 points
20 comments · 3 min read · LW link

AI, centralization, and the One Ring

owencb · Sep 13, 2024, 2:00 PM
80 points
12 comments · 8 min read · LW link
(strangecities.substack.com)

Evidence against Learned Search in a Chess-Playing Neural Network

p.b. · Sep 13, 2024, 11:59 AM
57 points
3 comments · 6 min read · LW link

My career exploration: Tools for building confidence

lynettebye · Sep 13, 2024, 11:37 AM
20 points
0 comments · 20 min read · LW link

Contra papers claiming superhuman AI forecasting

Sep 12, 2024, 6:10 PM
182 points
16 comments · 7 min read · LW link

OpenAI o1

Zach Stein-Perlman · Sep 12, 2024, 5:30 PM
147 points
41 comments · 1 min read · LW link

How to Give in to Threats (without incentivizing them)

Mikhail Samin · Sep 12, 2024, 3:55 PM
67 points
31 comments · 5 min read · LW link

Open Problems in AIXI Agent Foundations

Cole Wyeth · Sep 12, 2024, 3:38 PM
42 points
2 comments · 10 min read · LW link

On the destruction of America’s best high school

Chris_Leong · Sep 12, 2024, 3:30 PM
−6 points
7 comments · 1 min read · LW link
(scottaaronson.blog)

Optimising under arbitrarily many constraint equations

dkl9 · Sep 12, 2024, 2:59 PM
6 points
0 comments · 3 min read · LW link
(dkl9.net)

AI #81: Alpha Proteo

Zvi · Sep 12, 2024, 1:00 PM
59 points
3 comments · 35 min read · LW link
(thezvi.wordpress.com)

[Question] When can I be numerate?

FinalFormal2 · Sep 12, 2024, 4:05 AM
25 points
4 comments · 1 min read · LW link

A Nonconstructive Existence Proof of Aligned Superintelligence

Roko · Sep 12, 2024, 3:20 AM
0 points
80 comments · 1 min read · LW link
(transhumanaxiology.substack.com)

Collapsing the Belief/Knowledge Distinction

Jeremias · Sep 11, 2024, 9:24 PM
−7 points
8 comments · 1 min read · LW link

Programming Refusal with Conditional Activation Steering

Bruce W. Lee · Sep 11, 2024, 8:57 PM
41 points
0 comments · 11 min read · LW link
(brucewlee.com)

Checking public figures on whether they “answered the question”: quick analysis from Harris/Trump debate, and a proposal

david reinstein · Sep 11, 2024, 8:25 PM
7 points
4 comments · 1 min read · LW link
(open.substack.com)

AI Safety Newsletter #41: The Next Generation of Compute Scale. Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics

Sep 11, 2024, 7:14 PM
5 points
1 comment · 5 min read · LW link
(newsletter.safe.ai)

Refactoring cryonics as structural brain preservation

Andy_McKenzie · Sep 11, 2024, 6:36 PM
101 points
14 comments · 3 min read · LW link

[Question] Is this a Pivotal Weak Act? Creating bacteria that decompose metal

doomyeser · Sep 11, 2024, 6:07 PM
9 points
9 comments · 3 min read · LW link

How to discover the nature of sentience, and ethics

Gustavo Ramires · Sep 11, 2024, 5:22 PM
−2 points
5 comments · 5 min read · LW link

Seeking Mechanism Designer for Research into Internalizing Catastrophic Externalities

c.trout · Sep 11, 2024, 3:09 PM
24 points
2 comments · 3 min read · LW link

Could Things Be Very Different? How Historical Inertia Might Blind Us To Optimal Solutions

James Stephen Brown · Sep 11, 2024, 9:53 AM
5 points
0 comments · 8 min read · LW link
(nonzerosum.games)

Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It.

Andrew_Critch · Sep 11, 2024, 4:41 AM
53 points
11 comments · 3 min read · LW link

A necessary Membrane formalism feature

ThomasCederborg · Sep 10, 2024, 9:33 PM UTC
20 points
6 comments · 11 min read · LW link

Formalizing the Informal (event invite)

abramdemski · Sep 10, 2024, 7:22 PM UTC
42 points
0 comments · 1 min read · LW link

AI #80: Never Have I Ever

Zvi · Sep 10, 2024, 5:50 PM UTC
46 points
20 comments · 39 min read · LW link
(thezvi.wordpress.com)

The Best Lay Argument is not a Simple English Yud Essay

J Bostock · Sep 10, 2024, 5:34 PM UTC
253 points
15 comments · 5 min read · LW link

Eco­nomics Roundup #3

Zvi · Sep 10, 2024, 1:50 PM UTC
44 points
9 comments · 20 min read · LW link
(thezvi.wordpress.com)

Amplify is hiring! Work with us to support field-building initiatives through digital marketing

gergogaspar · Sep 10, 2024, 8:56 AM UTC
0 points
1 comment · 4 min read · LW link

What bootstraps intelligence?

invertedpassion · Sep 10, 2024, 7:11 AM UTC
2 points
2 comments · 1 min read · LW link

Physical Therapy Sucks (but have you tried hiding it in some peanut butter?)

Declan Molony · Sep 10, 2024, 5:54 AM UTC
16 points
12 comments · 1 min read · LW link