[Question] What are the arguments for/against FOOM?

FinalFormal2 · Jun 1, 2023, 5:23 PM
8 points
0 comments · 1 min read · LW link

Change my mind: Veganism entails trade-offs, and health is one of the axes

Elizabeth · Jun 1, 2023, 5:10 PM
160 points
85 comments · 19 min read · LW link · 2 reviews
(acesounderglass.com)

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Christopher King · Jun 1, 2023, 5:01 PM
−9 points
1 comment · 3 min read · LW link

Don’t waste your time meditating on meditation retreats!

EternallyBlissful · Jun 1, 2023, 4:56 PM
23 points
7 comments · 11 min read · LW link

[Request]: Use “Epilogenics” instead of “Eugenics” in most circumstances

GeneSmith · Jun 1, 2023, 3:36 PM
57 points
49 comments · 1 min read · LW link

Book Club: Thomas Schelling’s “The Strategy of Conflict”

Optimization Process · Jun 1, 2023, 3:29 PM
6 points
1 comment · 1 min read · LW link

Probably tell your friends when they make big mistakes

Chi Nguyen · Jun 1, 2023, 2:30 PM
15 points
1 comment · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.

Soroush Pour · Jun 1, 2023, 1:38 PM
17 points
0 comments · 5 min read · LW link
(www.soroushjp.com)

Work dumber not smarter

lemonhope · Jun 1, 2023, 12:40 PM
101 points
17 comments · 3 min read · LW link

Short Remark on the (subjective) mathematical ‘naturalness’ of the Nanda–Lieberum addition modulo 113 algorithm

carboniferous_umbraculum · Jun 1, 2023, 11:31 AM
104 points
12 comments · 2 min read · LW link

How will they feed us

meijer1973 · Jun 1, 2023, 8:49 AM
4 points
3 comments · 5 min read · LW link

“LLMs Don’t Have a Coherent Model of the World”—What it Means, Why it Matters

Davidmanheim · Jun 1, 2023, 7:46 AM
32 points
2 comments · 7 min read · LW link

General intelligence: what is it, what makes it hard, and will we have it soon?

homeopathicsyzygy · Jun 1, 2023, 6:46 AM
2 points
0 comments · 21 min read · LW link

Maximal Sentience: A Sentience Spectrum and Test Foundation

Snowyiu · Jun 1, 2023, 6:45 AM
1 point
2 comments · 4 min read · LW link

Re: The Crux List

Logan Zoellner · Jun 1, 2023, 4:48 AM
11 points
0 comments · 2 min read · LW link

An explanation of decision theories

metachirality · Jun 1, 2023, 3:42 AM
20 points
4 comments · 5 min read · LW link

Dancing to Positional Calling

jefftk · Jun 1, 2023, 2:40 AM
11 points
2 comments · 2 min read · LW link
(www.jefftk.com)

Intrinsic vs. Extrinsic Alignment

Alfonso Pérez Escudero · Jun 1, 2023, 1:06 AM
1 point
1 comment · 3 min read · LW link

Limiting factors to predict AI take-off speed

Alfonso Pérez Escudero · May 31, 2023, 11:19 PM
1 point
0 comments · 6 min read · LW link

Unpredictability and the Increasing Difficulty of AI Alignment for Increasingly Intelligent AI

Max_He-Ho · May 31, 2023, 10:25 PM
5 points
2 comments · 20 min read · LW link

Shutdown-Seeking AI

Simon Goldstein · May 31, 2023, 10:19 PM
50 points
32 comments · 15 min read · LW link

Full Automation is Unlikely and Unnecessary for Explosive Growth

aog · May 31, 2023, 9:55 PM
28 points
3 comments · 5 min read · LW link

LessWrong Community Weekend 2023 Updates: Keynote Speaker Malcolm Ocean, Remaining Tickets and More

Henry Prowbell · May 31, 2023, 9:53 PM
23 points
0 comments · 2 min read · LW link

The Divine Move Paradox & Thinking as a Species

Christopher James Hart · May 31, 2023, 9:38 PM
9 points
8 comments · 3 min read · LW link

Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

catubc · May 31, 2023, 9:18 PM
26 points
4 comments · 11 min read · LW link

[Question] How much overlap is there between the utility function of GPT-n and GPT-(n+1), assuming both are near AGI?

Phosphorous · May 31, 2023, 8:28 PM
2 points
0 comments · 2 min read · LW link

My AI-risk cartoon

pre · May 31, 2023, 7:46 PM
6 points
0 comments · 1 min read · LW link

Evaluation Evidence Reconstructions of Mock Crimes Submission 3

Alan E Dunne · May 31, 2023, 7:03 PM
−1 point
0 comments · 3 min read · LW link

Improving Mathematical Reasoning with Process Supervision

p.b. · May 31, 2023, 7:00 PM
14 points
3 comments · 1 min read · LW link
(openai.com)

The Crux List

Zvi · May 31, 2023, 6:30 PM
72 points
19 comments · 33 min read · LW link
(thezvi.wordpress.com)

Stages of Survival

Zvi · May 31, 2023, 6:30 PM
44 points
0 comments · 17 min read · LW link
(thezvi.wordpress.com)

Types and Degrees of Alignment

Zvi · May 31, 2023, 6:30 PM
36 points
10 comments · 8 min read · LW link
(thezvi.wordpress.com)

To Predict What Happens, Ask What Happens

Zvi · May 31, 2023, 6:30 PM
81 points
0 comments · 9 min read · LW link
(thezvi.wordpress.com)

A push towards interactive transformer decoding

R0bk · May 31, 2023, 5:56 PM
3 points
0 comments · 2 min read · LW link
(github.com)

Neuroevolution, Social Intelligence, and Logic

vinnik.dmitry07 · May 31, 2023, 5:54 PM
1 point
0 comments · 10 min read · LW link

Contrast Pairs Drive the Empirical Performance of Contrast Consistent Search (CCS)

Scott Emmons · May 31, 2023, 5:09 PM
97 points
1 comment · 6 min read · LW link · 1 review

Cosmopolitan values don’t come free

So8res · May 31, 2023, 3:58 PM
137 points
85 comments · 1 min read · LW link

[Question] Arguments Against Fossil Future?

Sable · May 31, 2023, 1:41 PM
13 points
29 comments · 1 min read · LW link

On Objective Ethics, and a bit about boats

EndlessBlue · May 31, 2023, 11:40 AM
−7 points
3 comments · 2 min read · LW link

Against Conflating Expertise: Distinguishing AI Development from AI Implication Analysis

Ratios · May 31, 2023, 9:50 AM
13 points
4 comments · 1 min read · LW link

A rough model for P(AI doom)

Michael Tontchev · May 31, 2023, 8:58 AM
0 points
1 comment · 2 min read · LW link

[Question] What’s the consensus on porn?

FinalFormal2 · May 31, 2023, 3:15 AM
5 points
19 comments · 1 min read · LW link

Product Endorsement: Food for sleep interruptions

Elizabeth · May 31, 2023, 1:50 AM
45 points
7 comments · 1 min read · LW link
(acesounderglass.com)

Optimal Clothing

Gordon Seidoh Worley · May 31, 2023, 1:00 AM
31 points
8 comments · 6 min read · LW link

Abstraction is Bigger than Natural Abstraction

Nicholas / Heather Kross · May 31, 2023, 12:00 AM
18 points
0 comments · 5 min read · LW link
(www.thinkingmuchbetter.com)

Humans, chimpanzees and other animals

gjm · May 30, 2023, 11:53 PM
21 points
18 comments · 1 min read · LW link

The case for removing alignment and ML research from the training dataset

beren · May 30, 2023, 8:54 PM
48 points
8 comments · 5 min read · LW link

Why Job Displacement Predictions are Wrong: Explanations of Cognitive Automation

Moritz Wallawitsch · May 30, 2023, 8:43 PM
−4 points
0 comments · 8 min read · LW link

PaLM-2 & GPT-4 in “Extrapolating GPT-N performance”

Lukas Finnveden · May 30, 2023, 6:33 PM
57 points
6 comments · 6 min read · LW link

Why I don’t think that the probability that AGI kills everyone is roughly 1 (but rather around 0.995).

Bastumannen · May 30, 2023, 5:54 PM
−6 points
0 comments · 2 min read · LW link