On value in humans, other animals, and AI

Michele Campolo · Jan 31, 2023, 11:33 PM
3 points
17 comments · 5 min read · LW link

Criticism of the main framework in AI alignment

Michele Campolo · Jan 31, 2023, 11:01 PM
19 points
2 comments · 6 min read · LW link

Nice Clothes are Good, Actually

Gordon Seidoh Worley · Jan 31, 2023, 7:22 PM
72 points
28 comments · 4 min read · LW link

[Linkpost] Human-narrated audio version of “Is Power-Seeking AI an Existential Risk?”

Joe Carlsmith · Jan 31, 2023, 7:21 PM
12 points
1 comment · 1 min read · LW link

No Really, Attention is ALL You Need—Attention can do feedforward networks

Robert_AIZI · Jan 31, 2023, 6:48 PM
29 points
7 comments · 6 min read · LW link
(aizi.substack.com)

Talk to me about your summer/career plans

Orpheus16 · Jan 31, 2023, 6:29 PM
31 points
3 comments · 2 min read · LW link

Mechanistic Interpretability Quickstart Guide

Neel Nanda · Jan 31, 2023, 4:35 PM
42 points
3 comments · 6 min read · LW link
(www.neelnanda.io)

New Hackathon: Robustness to distribution changes and ambiguity

Charbel-Raphaël · Jan 31, 2023, 12:50 PM
12 points
3 comments · 1 min read · LW link

Squiggle: Why and how to use it

brook · Jan 31, 2023, 12:37 PM
3 points
0 comments · LW link

Beware of Fake Alternatives

silentbob · Jan 31, 2023, 10:21 AM
57 points
11 comments · 4 min read · LW link · 1 review

Inner Misalignment in “Simulator” LLMs

Adam Scherlis · Jan 31, 2023, 8:33 AM
84 points
12 comments · 4 min read · LW link

Why AI experts’ jobs are always decades from being automated

Allen Hoskins · Jan 31, 2023, 3:01 AM
0 points
1 comment · 5 min read · LW link
(open.substack.com)

Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)

Jan 31, 2023, 2:06 AM
28 points
0 comments · 2 min read · LW link

EA & LW Fo­rum Weekly Sum­mary (23rd − 29th Jan ’23)

Zoe WilliamsJan 31, 2023, 12:36 AM
12 points
0 commentsLW link

Saying things because they sound good

Adam Zerner · Jan 31, 2023, 12:17 AM
23 points
6 comments · 2 min read · LW link

South Bay Meetup

DavidFriedman · Jan 30, 2023, 11:35 PM
2 points
0 comments · 1 min read · LW link

Peter Thiel’s speech at Oxford Debating Union on technological stagnation, Nuclear weapons, COVID, Environment, Alignment, ‘anti-anti anti-anti-classical liberalism’, Bostrom, LW, etc.

M. Y. Zuo · Jan 30, 2023, 11:31 PM
8 points
33 comments · 1 min read · LW link

Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook)

Hastings · Jan 30, 2023, 10:46 PM
35 points
1 comment · 3 min read · LW link

Humans Can Be Manually Strategic

Screwtape · Jan 30, 2023, 10:35 PM
13 points
0 comments · 3 min read · LW link

Why I hate the “accident vs. misuse” AI x-risk dichotomy (quick thoughts on “structural risk”)

David Scott Krueger (formerly: capybaralet) · Jan 30, 2023, 6:50 PM
34 points
41 comments · 2 min read · LW link

2022 Unofficial LessWrong General Census

Screwtape · Jan 30, 2023, 6:36 PM
97 points
33 comments · 2 min read · LW link

Call for submissions: “(In)human Values and Artificial Agency”, ALIFE 2023

the gears to ascension · Jan 30, 2023, 5:37 PM
29 points
4 comments · 1 min read · LW link
(humanvaluesandartificialagency.com)

What I mean by “alignment is in large part about making cognition aimable at all”

So8res · Jan 30, 2023, 3:22 PM
171 points
25 comments · 2 min read · LW link

The Energy Requirements and Feasibility of Off-World Mining

clans · Jan 30, 2023, 3:07 PM
31 points
1 comment · 8 min read · LW link
(locationtbd.home.blog)

Whatever their arguments, Covid vaccine sceptics will probably never convince me

contrarianbrit · Jan 30, 2023, 1:42 PM
8 points
10 comments · 3 min read · LW link
(thomasprosser.substack.com)

Simulacra Levels Summary

Zvi · Jan 30, 2023, 1:40 PM
77 points
14 comments · 7 min read · LW link
(thezvi.wordpress.com)

A Few Principles of Successful AI Design

Vestozia · Jan 30, 2023, 10:42 AM
1 point
0 comments · 8 min read · LW link

Against Boltzmann mesaoptimizers

porby · Jan 30, 2023, 2:55 AM
77 points
6 comments · 4 min read · LW link

How Likely is Losing a Google Account?

jefftk · Jan 30, 2023, 12:20 AM
52 points
12 comments · 3 min read · LW link
(www.jefftk.com)

Model-driven feedback could amplify alignment failures

aog · Jan 30, 2023, 12:00 AM
21 points
1 comment · 2 min read · LW link

Takeaways from calibration training

Olli Järviniemi · Jan 29, 2023, 7:09 PM
45 points
2 comments · 3 min read · LW link · 1 review

Structure, creativity, and novelty

TsviBT · Jan 29, 2023, 2:30 PM
19 points
4 comments · 7 min read · LW link

What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first?

Nebuch · Jan 29, 2023, 1:26 PM
8 points
6 comments · 3 min read · LW link

Compendium of problems with RLHF

Charbel-Raphaël · Jan 29, 2023, 11:40 AM
120 points
16 comments · 10 min read · LW link

My biggest takeaway from Redwood Research REMIX

Alok Singh · Jan 29, 2023, 11:00 AM
0 points
0 comments · 1 min read · LW link
(alok.github.io)

EA novel published on Amazon

Timothy Underwood · Jan 29, 2023, 8:33 AM
17 points
0 comments · LW link

Reverse RSS Stats

jefftk · Jan 29, 2023, 3:40 AM
12 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Why and How to Graduate Early [U.S.]

Tego · Jan 29, 2023, 1:28 AM
53 points
9 comments · 8 min read · LW link · 1 review

Stop-gradients lead to fixed point predictions

Jan 28, 2023, 10:47 PM
37 points
2 comments · 24 min read · LW link

Eli Dourado AMA on the Progress Forum

jasoncrawford · Jan 28, 2023, 10:18 PM
19 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

LW Filter Tags (Rationality/World Modeling now promoted in Latest Posts)

Jan 28, 2023, 10:14 PM
60 points
4 comments · 3 min read · LW link

No Fire in the Equations

Carlos Ramirez · Jan 28, 2023, 9:16 PM
−16 points
4 comments · 3 min read · LW link

Optimality is the tiger, and annoying the user is its teeth

Christopher King · Jan 28, 2023, 8:20 PM
25 points
6 comments · 2 min read · LW link

On not getting contaminated by the wrong obesity ideas

Natália · Jan 28, 2023, 8:18 PM
306 points
69 comments · 30 min read · LW link

Advice I found helpful in 2022

Orpheus16 · Jan 28, 2023, 7:48 PM
36 points
5 comments · 2 min read · LW link

The Knockdown Argument Paradox

Bryan Frances · Jan 28, 2023, 7:23 PM
−12 points
6 comments · 8 min read · LW link

Less Wrong/ACX Budapest Feb 4th Meetup

Jan 28, 2023, 2:49 PM
2 points
0 comments · 1 min read · LW link

Reflections on Deception & Generality in Scalable Oversight (Another OpenAI Alignment Review)

Shoshannah Tekofsky · Jan 28, 2023, 5:26 AM
53 points
7 comments · 7 min read · LW link

A Simple Alignment Typology

Shoshannah Tekofsky · Jan 28, 2023, 5:26 AM
34 points
2 comments · 2 min read · LW link

Spooky action at a distance in the loss landscape

Jan 28, 2023, 12:22 AM
61 points
4 comments · 7 min read · LW link
(www.jessehoogland.com)