[Question] How could AIs ‘see’ each other’s source code?

Kenny · Jun 2, 2023, 10:41 PM
29 points
45 comments · 1 min read · LW link

Proposal: labs should precommit to pausing if an AI argues for itself to be improved

NickGabs · Jun 2, 2023, 10:31 PM
3 points
3 comments · 4 min read · LW link

Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program

Christopher King · Jun 2, 2023, 9:54 PM
7 points
4 comments · 16 min read · LW link

Thoughts on Dancing the Whole Dance: Positional Calling for Contra

jefftk · Jun 2, 2023, 8:50 PM
10 points
0 comments · 5 min read · LW link
(www.jefftk.com)

Advice for Entering AI Safety Research

scasper · Jun 2, 2023, 8:46 PM
26 points
2 comments · 5 min read · LW link

AI should be used to find better morality

Jorterder · Jun 2, 2023, 8:38 PM
−21 points
1 comment · 1 min read · LW link

A mind needn’t be curious to reap the benefits of curiosity

So8res · Jun 2, 2023, 6:00 PM
78 points
14 comments · 1 min read · LW link

[Question] Are computationally complex algorithms expensive to have, expensive to operate, or both?

Noosphere89 · Jun 2, 2023, 5:50 PM
7 points
5 comments · 1 min read · LW link

[Replication] Conjecture’s Sparse Coding in Toy Models

Jun 2, 2023, 5:34 PM
24 points
0 comments · 1 min read · LW link

Limits to Learning: Rethinking AGI’s Path to Dominance

tangerine · Jun 2, 2023, 4:43 PM
10 points
4 comments · 15 min read · LW link

The Control Problem: Unsolved or Unsolvable?

Remmelt · Jun 2, 2023, 3:42 PM
55 points
46 comments · 14 min read · LW link

Hallucinating Suction

Johannes C. Mayer · Jun 2, 2023, 2:16 PM
6 points
0 comments · 2 min read · LW link

Winning doesn’t need to flow through increases in rationality

Michel · Jun 2, 2023, 12:05 PM
11 points
5 comments · 1 min read · LW link

Product Recommendation: LessWrong dialogues with Recast

Bart Bussmann · Jun 2, 2023, 8:05 AM
5 points
0 comments · 1 min read · LW link

Think carefully before calling RL policies “agents”

TurnTrout · Jun 2, 2023, 3:46 AM
134 points
38 comments · 4 min read · LW link · 1 review

Dreams of “Mathopedia”

Nicholas / Heather Kross · Jun 2, 2023, 1:30 AM
40 points
16 comments · 2 min read · LW link
(www.thinkingmuchbetter.com)

Outreach success: Intro to AI risk that has been successful

Michael Tontchev · Jun 1, 2023, 11:12 PM
83 points
8 comments · 74 min read · LW link
(medium.com)

Open Source LLMs Can Now Actively Lie

Josh Levy · Jun 1, 2023, 10:03 PM
6 points
0 comments · 3 min read · LW link

Safe AI and moral AI

William D'Alessandro · Jun 1, 2023, 9:36 PM
−3 points
0 comments · 10 min read · LW link

AI #14: A Very Good Sentence

Zvi · Jun 1, 2023, 9:30 PM
118 points
30 comments · 65 min read · LW link
(thezvi.wordpress.com)

Four levels of understanding decision theory

Max H · Jun 1, 2023, 8:55 PM
12 points
11 comments · 4 min read · LW link

Things I Learned by Spending Five Thousand Hours In Non-EA Charities

jenn · Jun 1, 2023, 8:48 PM
430 points
35 comments · 8 min read · LW link · 1 review
(jenn.site)

self-improvement-executors are not goal-maximizers

bhauth · Jun 1, 2023, 8:46 PM
14 points
0 comments · 1 min read · LW link

Experimental Fat Loss

johnlawrenceaspden · Jun 1, 2023, 8:26 PM
23 points
5 comments · 1 min read · LW link

Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better?

1a3orn · Jun 1, 2023, 7:36 PM
137 points
76 comments · 24 min read · LW link · 2 reviews

Progress links and tweets, 2023-06-01

jasoncrawford · Jun 1, 2023, 7:03 PM
10 points
3 comments · 1 min read · LW link
(rootsofprogress.org)

[Question] When does an AI become intelligent enough to become self-aware and power-seeking?

FinalFormal2 · Jun 1, 2023, 6:09 PM
1 point
1 comment · 1 min read · LW link

Uncertainty about the future does not imply that AGI will go well

Lauro Langosco · Jun 1, 2023, 5:38 PM
62 points
11 comments · 7 min read · LW link

[Question] What are the arguments for/against FOOM?

FinalFormal2 · Jun 1, 2023, 5:23 PM
8 points
0 comments · 1 min read · LW link

Change my mind: Veganism entails trade-offs, and health is one of the axes

Elizabeth · Jun 1, 2023, 5:10 PM
160 points
85 comments · 19 min read · LW link · 2 reviews
(acesounderglass.com)

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Christopher King · Jun 1, 2023, 5:01 PM
−9 points
1 comment · 3 min read · LW link

Don’t waste your time meditating on meditation retreats!

EternallyBlissful · Jun 1, 2023, 4:56 PM
23 points
7 comments · 11 min read · LW link

[Request]: Use “Epilogenics” instead of “Eugenics” in most circumstances

GeneSmith · Jun 1, 2023, 3:36 PM
56 points
49 comments · 1 min read · LW link

Book Club: Thomas Schelling’s “The Strategy of Conflict”

Optimization Process · Jun 1, 2023, 3:29 PM
6 points
1 comment · 1 min read · LW link

Probably tell your friends when they make big mistakes

Chi Nguyen · Jun 1, 2023, 2:30 PM
15 points
1 comment · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.

Soroush Pour · Jun 1, 2023, 1:38 PM
17 points
0 comments · 5 min read · LW link
(www.soroushjp.com)

Work dumber not smarter

lemonhope · Jun 1, 2023, 12:40 PM
101 points
17 comments · 3 min read · LW link

Short Remark on the (subjective) mathematical ‘naturalness’ of the Nanda—Lieberum addition modulo 113 algorithm

carboniferous_umbraculum · Jun 1, 2023, 11:31 AM
104 points
12 comments · 2 min read · LW link

How will they feed us

meijer1973 · Jun 1, 2023, 8:49 AM
4 points
3 comments · 5 min read · LW link

“LLMs Don’t Have a Coherent Model of the World”—What it Means, Why it Matters

Davidmanheim · Jun 1, 2023, 7:46 AM
32 points
2 comments · 7 min read · LW link

General intelligence: what is it, what makes it hard, and will we have it soon?

homeopathicsyzygy · Jun 1, 2023, 6:46 AM
2 points
0 comments · 21 min read · LW link

Maximal Sentience: A Sentience Spectrum and Test Foundation

Snowyiu · Jun 1, 2023, 6:45 AM
1 point
2 comments · 4 min read · LW link

Re: The Crux List

Logan Zoellner · Jun 1, 2023, 4:48 AM
11 points
0 comments · 2 min read · LW link

An explanation of decision theories

metachirality · Jun 1, 2023, 3:42 AM
20 points
4 comments · 5 min read · LW link

Dancing to Positional Calling

jefftk · Jun 1, 2023, 2:40 AM
11 points
2 comments · 2 min read · LW link
(www.jefftk.com)

Intrinsic vs. Extrinsic Alignment

Alfonso Pérez Escudero · Jun 1, 2023, 1:06 AM
1 point
1 comment · 3 min read · LW link

Limiting factors to predict AI take-off speed

Alfonso Pérez Escudero · May 31, 2023, 11:19 PM
1 point
0 comments · 6 min read · LW link

Unpredictability and the Increasing Difficulty of AI Alignment for Increasingly Intelligent AI

Max_He-Ho · May 31, 2023, 10:25 PM
5 points
2 comments · 20 min read · LW link

Shutdown-Seeking AI

Simon Goldstein · May 31, 2023, 10:19 PM
50 points
32 comments · 15 min read · LW link

Full Automation is Unlikely and Unnecessary for Explosive Growth

aog · May 31, 2023, 9:55 PM
28 points
3 comments · 5 min read · LW link