A method of writing content easily with little anxiety

jessicata · Apr 8, 2022, 10:11 PM
64 points
19 comments · 3 min read · LW link
(unstableontology.com)

Good Heart Donation Lottery Winner

Gordon Seidoh Worley · Apr 8, 2022, 8:34 PM
21 points
0 comments · 1 min read · LW link

Roam Research Mobile is Out!

Logan Riggs · Apr 8, 2022, 7:05 PM
12 points
0 comments · 1 min read · LW link

Progress Report 4: logit lens redux

Nathan Helm-Burger · Apr 8, 2022, 6:35 PM
4 points
0 comments · 2 min read · LW link

[Question] What would the creation of aligned AGI look like for us?

Perhaps · Apr 8, 2022, 6:05 PM
3 points
4 comments · 1 min read · LW link

Convincing All Capability Researchers

Logan Riggs · Apr 8, 2022, 5:40 PM
120 points
70 comments · 3 min read · LW link

Language Model Tools for Alignment Research

Logan Riggs · Apr 8, 2022, 5:32 PM
28 points
0 comments · 2 min read · LW link

Takeaways From 3 Years Working In Machine Learning

George3d6 · Apr 8, 2022, 5:14 PM
35 points
10 comments · 11 min read · LW link
(www.epistem.ink)

[RETRACTED] It’s time for EA leadership to pull the short-timelines fire alarm.

Not Relevant · Apr 8, 2022, 4:07 PM
115 points
166 comments · 4 min read · LW link

Boulder ACX Meetup, Sun Apr 24

Josh Sacks · Apr 8, 2022, 3:43 PM
5 points
4 comments · 1 min read · LW link

AIs should learn human preferences, not biases

Stuart_Armstrong · Apr 8, 2022, 1:45 PM
10 points
0 comments · 1 min read · LW link

We Are Conjecture, A New Alignment Research Startup

Connor Leahy · Apr 8, 2022, 11:40 AM
197 points
25 comments · 4 min read · LW link

Different perspectives on concept extrapolation

Stuart_Armstrong · Apr 8, 2022, 10:42 AM
48 points
8 comments · 5 min read · LW link · 1 review

[Question] Is there a possibility that the upcoming scaling of data in language models causes A.G.I.?

ArtMi · Apr 8, 2022, 6:56 AM
2 points
0 comments · 1 min read · LW link

Good Heart Week Is Over!

Ben Pace · Apr 8, 2022, 6:43 AM
55 points
35 comments · 1 min read · LW link

The Rationalist-Etcetera Diaspora: A SPREADSHEET!!

Amelia Bedelia · Apr 8, 2022, 5:43 AM
25 points
2 comments · 1 min read · LW link

AI Alignment and Recognition

Chris_Leong · Apr 8, 2022, 5:39 AM
7 points
2 comments · 1 min read · LW link

Nature’s answer to the explore/exploit problem

lizard_brain · Apr 8, 2022, 5:13 AM
5 points
1 comment · 1 min read · LW link

Edge cases don’t invalidate the rule

Adam Selker · Apr 8, 2022, 4:17 AM
6 points
5 comments · 2 min read · LW link

Reverse (intent) alignment may allow for safer Oracles

azsantosk · Apr 8, 2022, 2:48 AM
4 points
0 comments · 4 min read · LW link

Summary: “Internet Search tips” by Gwern Branwen

Pablo Repetto · Apr 8, 2022, 2:02 AM
12 points
2 comments · 4 min read · LW link
(pabloernesto.github.io)

Maxwell Peterson’s Highlighted Posts

Maxwell Peterson · Apr 8, 2022, 1:34 AM
5 points
0 comments · 1 min read · LW link

Foot-Chording Chords

jefftk · Apr 8, 2022, 1:10 AM
8 points
0 comments · 1 min read · LW link
(www.jefftk.com)

DeepMind: The Podcast—Excerpts on AGI

WilliamKiely · Apr 7, 2022, 10:09 PM
99 points
12 comments · 5 min read · LW link

Convincing Your Brain That Humanity is Evil is Easy

Johannes C. Mayer · Apr 7, 2022, 9:39 PM
14 points
4 comments · 2 min read · LW link

Playing with DALL·E 2

Dave Orr · Apr 7, 2022, 6:49 PM
166 points
118 comments · 6 min read · LW link

The Explanatory Gap of AI

David Valdman · Apr 7, 2022, 6:28 PM
1 point
0 comments · 4 min read · LW link

Believable near-term AI disaster

Dagon · Apr 7, 2022, 6:20 PM
9 points
3 comments · 2 min read · LW link

[Question] List of concrete hypotheticals for AI takeover?

Yitz · Apr 7, 2022, 4:54 PM
7 points
5 comments · 1 min read · LW link

What if “friendly/unfriendly” GAI isn’t a thing?

homunq · Apr 7, 2022, 4:54 PM
−1 points
4 comments · 2 min read · LW link

Productive Mistakes, Not Perfect Answers

adamShimi · Apr 7, 2022, 4:41 PM
100 points
11 comments · 6 min read · LW link

Covid 4/7/22: Opening Day

Zvi · Apr 7, 2022, 4:10 PM
28 points
5 comments · 5 min read · LW link
(thezvi.wordpress.com)

Duncan Sabien On Writing

lynettebye · Apr 7, 2022, 4:09 PM
36 points
3 comments · 16 min read · LW link

[ASoT] Some thoughts about imperfect world modeling

leogao · Apr 7, 2022, 3:42 PM
7 points
0 comments · 4 min read · LW link

How BoMAI Might fail

Donald Hobson · Apr 7, 2022, 3:32 PM
11 points
3 comments · 2 min read · LW link

ACX Montreal Meetup Apr 24 2022

E · Apr 7, 2022, 2:14 PM
5 points
0 comments · 1 min read · LW link

Is GPT3 a Good Rationalist? - InstructGPT3 [2/2]

simeon_c · Apr 7, 2022, 1:46 PM
11 points
0 comments · 7 min read · LW link

I discovered LessWrong… during Good Heart Week

identity.key · Apr 7, 2022, 1:22 PM
51 points
12 comments · 3 min read · LW link

Research agenda—Building a multi-modal chess-language model

p.b. · Apr 7, 2022, 12:25 PM
8 points
2 comments · 2 min read · LW link

Predicting a global catastrophe: the Ukrainian model

RomanS · Apr 7, 2022, 12:06 PM
5 points
11 comments · 2 min read · LW link

Truthfulness, standards and credibility

Joe Collman · Apr 7, 2022, 10:31 AM
12 points
2 comments · 32 min read · LW link

How to train your transformer

p.b. · Apr 7, 2022, 9:34 AM
6 points
0 comments · 8 min read · LW link

Finding Useful Things

Johannes C. Mayer · Apr 7, 2022, 5:57 AM
8 points
0 comments · 4 min read · LW link

Book Review: A PhD is Not Enough

ljh2 · Apr 7, 2022, 5:15 AM
22 points
4 comments · 23 min read · LW link

Setting the Brains Difficulty-Anchor

Johannes C. Mayer · Apr 7, 2022, 5:04 AM
2 points
0 comments · 3 min read · LW link

How I Got So Much GHT

Gordon Seidoh Worley · Apr 7, 2022, 3:59 AM
14 points
2 comments · 5 min read · LW link

What Should We Optimize—A Conversation

Johannes C. Mayer · Apr 7, 2022, 3:47 AM
1 point
0 comments · 14 min read · LW link

Contra: Avoiding Sore Arms

jefftk · Apr 7, 2022, 1:10 AM
11 points
0 comments · 2 min read · LW link
(www.jefftk.com)

Why Take Care Of Your Health?

MondSemmel · Apr 6, 2022, 11:11 PM
40 points
21 comments · 6 min read · LW link

[Question] What are rationalists worst at?

Gordon Seidoh Worley · Apr 6, 2022, 11:00 PM
11 points
4 comments · 1 min read · LW link