Gradations of moral weight

MichaelStJules · Feb 29, 2024, 11:08 PM
1 point
0 comments · LW link

Approaching Human-Level Forecasting with Language Models

Feb 29, 2024, 10:36 PM
60 points
6 comments · 3 min read · LW link

Paper review: “The Unreasonable Effectiveness of Easy Training Data for Hard Tasks”

Vassil Tashev · Feb 29, 2024, 6:44 PM
11 points
0 comments · 4 min read · LW link

What’s in the box?! – Towards interpretability by distinguishing niches of value within neural networks.

Joshua Clancy · Feb 29, 2024, 6:33 PM
3 points
4 comments · 128 min read · LW link

Short Post: Discerning Truth from Trash

FinalFormal2 · Feb 29, 2024, 6:09 PM
−2 points
0 comments · 1 min read · LW link

AI #53: One More Leap

Zvi · Feb 29, 2024, 4:10 PM
45 points
0 comments · 38 min read · LW link
(thezvi.wordpress.com)

Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey

Andy_McKenzie · Feb 29, 2024, 2:47 PM
28 points
6 comments · 1 min read · LW link

Bengio’s Alignment Proposal: “Towards a Cautious Scientist AI with Convergent Safety Bounds”

mattmacdermott · Feb 29, 2024, 1:59 PM
76 points
19 comments · 14 min read · LW link
(yoshuabengio.org)

Tips for Empirical Alignment Research

Ethan Perez · Feb 29, 2024, 6:04 AM
163 points
4 comments · 23 min read · LW link

[Question] Supposing the 1bit LLM paper pans out

O O · Feb 29, 2024, 5:31 AM
27 points
11 comments · 1 min read · LW link

Can RLLMv3’s ability to defend against jailbreaks be attributed to datasets containing stories about Jung’s shadow integration theory?

MiguelDev · Feb 29, 2024, 5:13 AM
7 points
2 comments · 11 min read · LW link

Post series on “Liability Law for reducing Existential Risk from AI”

Nora_Ammann · Feb 29, 2024, 4:39 AM
42 points
1 comment · 1 min read · LW link
(forum.effectivealtruism.org)

Tour Retrospective February 2024

jefftk · Feb 29, 2024, 3:50 AM
10 points
0 comments · 4 min read · LW link
(www.jefftk.com)

Locating My Eyes (Part 3 of “The Sense of Physical Necessity”)

LoganStrohl · Feb 29, 2024, 3:09 AM
43 points
4 comments · 22 min read · LW link

Conspiracy Theorists Aren’t Ignorant. They’re Bad At Epistemology.

omnizoid · Feb 28, 2024, 11:39 PM
18 points
10 comments · 5 min read · LW link

Discovering alignment windfalls reduces AI risk

Feb 28, 2024, 9:23 PM
15 points
1 comment · 8 min read · LW link
(blog.elicit.com)

my theory of the industrial revolution

bhauth · Feb 28, 2024, 9:07 PM
23 points
7 comments · 3 min read · LW link
(www.bhauth.com)

Wholesomeness and Effective Altruism

owencb · Feb 28, 2024, 8:28 PM
42 points
3 comments · LW link

timestamping through the Singularity

throwaway918119127 · Feb 28, 2024, 7:09 PM
−2 points
4 comments · 8 min read · LW link

Evidential Cooperation in Large Worlds: Potential Objections & FAQ

Feb 28, 2024, 6:58 PM
42 points
5 comments · LW link

Timaeus’s First Four Months

Feb 28, 2024, 5:01 PM
173 points
6 comments · 6 min read · LW link

Notes on control evaluations for safety cases

Feb 28, 2024, 4:15 PM
49 points
0 comments · 32 min read · LW link

Corporate Governance for Frontier AI Labs: A Research Agenda

Matthew Wearden · Feb 28, 2024, 11:29 AM
4 points
0 comments · 16 min read · LW link
(matthewwearden.co.uk)

How AI Will Change Education

robotelvis · Feb 28, 2024, 5:30 AM
6 points
3 comments · 5 min read · LW link
(messyprogress.substack.com)

Band Lessons?

jefftk · Feb 28, 2024, 3:00 AM
13 points
3 comments · 1 min read · LW link
(www.jefftk.com)

New LessWrong review winner UI (“The LeastWrong” section and full-art post pages)

kave · Feb 28, 2024, 2:42 AM
105 points
64 comments · 1 min read · LW link

Counting arguments provide no evidence for AI doom

Feb 27, 2024, 11:03 PM
101 points
188 comments · 14 min read · LW link

Which animals realize which types of subjective welfare?

MichaelStJules · Feb 27, 2024, 7:31 PM
4 points
0 comments · LW link

Biosecurity and AI: Risks and Opportunities

Steve Newman · Feb 27, 2024, 6:45 PM
11 points
1 comment · 7 min read · LW link
(www.safe.ai)

The Gemini Incident Continues

Zvi · Feb 27, 2024, 4:00 PM
45 points
6 comments · 48 min read · LW link
(thezvi.wordpress.com)

How I internalized my achievements to better deal with negative feelings

Raymond Koopmanschap · Feb 27, 2024, 3:10 PM
42 points
7 comments · 6 min read · LW link

On Frustration and Regret

silentbob · Feb 27, 2024, 12:19 PM
8 points
0 comments · 4 min read · LW link

San Francisco ACX Meetup “Third Saturday”

Feb 27, 2024, 7:07 AM
7 points
0 comments · 1 min read · LW link

Examining Language Model Performance with Reconstructed Activations using Sparse Autoencoders

Feb 27, 2024, 2:43 AM
43 points
16 comments · 15 min read · LW link

Project idea: an iterated prisoner’s dilemma competition/game

Adam Zerner · Feb 26, 2024, 11:06 PM
8 points
0 comments · 5 min read · LW link

Acting Wholesomely

owencb · Feb 26, 2024, 9:49 PM
59 points
64 comments · LW link

Getting rational now or later: navigating procrastination and time-inconsistent preferences for new rationalists

milo_thoughts · Feb 26, 2024, 7:38 PM
1 point
0 comments · 8 min read · LW link

[Question] Whom Do You Trust?

JackOfAllTrades · Feb 26, 2024, 7:38 PM
1 point
0 comments · 1 min read · LW link

Boundary Violations vs Boundary Dissolution

Chipmonk · Feb 26, 2024, 6:59 PM
8 points
4 comments · 1 min read · LW link

[Question] Can we get an AI to “do our alignment homework for us”?

Chris_Leong · Feb 26, 2024, 7:56 AM
53 points
33 comments · 1 min read · LW link

How I build and run behavioral interviews

benkuhn · Feb 26, 2024, 5:50 AM
32 points
6 comments · 4 min read · LW link
(www.benkuhn.net)

Hidden Cognition Detection Methods and Benchmarks

Paul Colognese · Feb 26, 2024, 5:31 AM
22 points
11 comments · 4 min read · LW link

Cellular respiration as a steam engine

dkl9 · Feb 25, 2024, 8:17 PM
24 points
1 comment · 1 min read · LW link
(dkl9.net)

[Question] Rationalism and Dependent Origination?

Baometrus · Feb 25, 2024, 6:16 PM
2 points
3 comments · 1 min read · LW link

China-AI forecasts

NathanBarnard · Feb 25, 2024, 4:49 PM
40 points
29 comments · 6 min read · LW link

Ideological Bayesians

Kevin Dorst · Feb 25, 2024, 2:17 PM
96 points
4 comments · 10 min read · LW link
(kevindorst.substack.com)

Deconfusing In-Context Learning

Arjun Panickssery · Feb 25, 2024, 9:48 AM
37 points
1 comment · 2 min read · LW link

Everett branches, inter-light cone trade and other alien matters: Appendix to “An ECL explainer”

Feb 24, 2024, 11:09 PM
17 points
0 comments · LW link

Cooperating with aliens and AGIs: An ECL explainer

Feb 24, 2024, 10:58 PM
55 points
8 comments · LW link

Choosing My Quest (Part 2 of “The Sense Of Physical Necessity”)

LoganStrohl · Feb 24, 2024, 9:31 PM
40 points
7 comments · 12 min read · LW link