Without a trajectory change, the development of AGI is likely to go badly

Max H · May 29, 2023, 11:42 PM
16 points
2 comments · 13 min read · LW link

Winners-take-how-much?

YonatanK · May 29, 2023, 9:56 PM
3 points
2 comments · 3 min read · LW link

Reply to a fertility doctor concerning polygenic embryo screening

GeneSmith · May 29, 2023, 9:50 PM
59 points
6 comments · 8 min read · LW link

Sentience matters

So8res · May 29, 2023, 9:25 PM
143 points
96 comments · 2 min read · LW link

Wikipedia as an introduction to the alignment problem

SoerenMind · May 29, 2023, 6:43 PM
83 points
10 comments · 1 min read · LW link
(en.wikipedia.org)

[Question] What are some of the best introductions/breakdowns of AI existential risk for those unfamiliar?

Isaac King · May 29, 2023, 5:04 PM
17 points
2 comments · 1 min read · LW link

Creating Flashcards with LLMs

Diogo Cruz · May 29, 2023, 4:55 PM
15 points
3 comments · 9 min read · LW link

On the Impossibility of Intelligent Paperclip Maximizers

Michael Simkin · May 29, 2023, 4:55 PM
−21 points
5 comments · 4 min read · LW link

Minimum Viable Exterminator

Richard Horvath · May 29, 2023, 4:32 PM
14 points
5 comments · 5 min read · LW link

An LLM-based “exemplary actor”

Roman Leventov · May 29, 2023, 11:12 AM
16 points
0 comments · 12 min read · LW link

Aligning an H-JEPA agent via training on the outputs of an LLM-based “exemplary actor”

Roman Leventov · May 29, 2023, 11:08 AM
12 points
10 comments · 30 min read · LW link

Gemini will bring the next big timeline update

p.b. · May 29, 2023, 6:05 AM
50 points
6 comments · 1 min read · LW link

Proposed Alignment Technique: OSNR (Output Sanitization via Noising and Reconstruction) for Safer Usage of Potentially Misaligned AGI

sudo · May 29, 2023, 1:35 AM
14 points
9 comments · 6 min read · LW link

Morality is Accidental & Self-Congratulatory

ymeskhout · May 29, 2023, 12:40 AM
26 points
40 comments · 5 min read · LW link

TinyStories: Small Language Models That Still Speak Coherent English

Ulisse Mini · May 28, 2023, 10:23 PM
66 points
8 comments · 2 min read · LW link
(arxiv.org)

“Membranes” is better terminology than “boundaries” alone

May 28, 2023, 10:16 PM
30 points
12 comments · 3 min read · LW link

The king token

p.b. · May 28, 2023, 7:18 PM
17 points
0 comments · 4 min read · LW link

Language Agents Reduce the Risk of Existential Catastrophe

May 28, 2023, 7:10 PM
39 points
14 comments · 26 min read · LW link

Devil’s Advocate: Adverse Selection Against Conscientiousness

lionhearted (Sebastian Marshall) · May 28, 2023, 5:53 PM
10 points
2 comments · 1 min read · LW link

Reacts now enabled on 100% of posts, though still just experimenting

Ruby · May 28, 2023, 5:36 AM
88 points
73 comments · 2 min read · LW link

My AI Alignment Research Agenda and Threat Model, right now (May 2023)

Nicholas / Heather Kross · May 28, 2023, 3:23 AM
25 points
0 comments · 6 min read · LW link
(www.thinkingmuchbetter.com)

Kelly betting vs expectation maximization

MorgneticField · May 28, 2023, 1:54 AM
35 points
33 comments · 5 min read · LW link

Why and When Interpretability Work is Dangerous

Nicholas / Heather Kross · May 28, 2023, 12:27 AM
20 points
9 comments · 8 min read · LW link
(www.thinkingmuchbetter.com)

Twin Cities ACX Meetup—June 2023

Timothy M. · May 27, 2023, 8:11 PM
1 point
1 comment · 1 min read · LW link

Project Idea: Challenge Groups for Alignment Researchers

Adam Zerner · May 27, 2023, 8:10 PM
13 points
0 comments · 1 min read · LW link

Introspective Bayes

False Name · May 27, 2023, 7:35 PM
−3 points
2 comments · 16 min read · LW link

Should Rational Animations invite viewers to read content on LessWrong?

Writer · May 27, 2023, 7:26 PM
40 points
9 comments · 3 min read · LW link

Who are the Experts on Cryonics?

Mati_Roy · May 27, 2023, 7:24 PM
30 points
9 comments · 1 min read · LW link
(biostasis.substack.com)

AI and Planet Earth are incompatible.

archeon · May 27, 2023, 6:59 PM
−4 points
2 comments · 1 min read · LW link

South Bay ACX/LW Meetup

IS · May 27, 2023, 5:25 PM
2 points
0 comments · 1 min read · LW link

Hands-On Experience Is Not Magic

Thane Ruthenis · May 27, 2023, 4:57 PM
22 points
14 comments · 5 min read · LW link

Is Deontological AI Safe? [Feedback Draft]

May 27, 2023, 4:39 PM
19 points
15 comments · 20 min read · LW link

San Francisco ACX Meetup “First Saturday” June 3, 1 pm

guenael · May 27, 2023, 1:58 PM
1 point
0 comments · 1 min read · LW link

Papers on protein design

alexlyzhov · May 27, 2023, 1:18 AM
9 points
0 comments · 3 min read · LW link

D&D.Sci 5E: Return of the League of Defenders

aphyer · May 26, 2023, 8:39 PM
42 points
11 comments · 3 min read · LW link

Seeking (Paid) Case Studies on Standards

HoldenKarnofsky · May 26, 2023, 5:58 PM
69 points
9 comments · 11 min read · LW link

Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies

May 26, 2023, 5:44 PM
88 points
13 comments · 24 min read · LW link

Request: stop advancing AI capabilities

So8res · May 26, 2023, 5:42 PM
154 points
24 comments · 1 min read · LW link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · May 26, 2023, 3:57 PM
36 points
20 comments · LW link

The American Information Revolution in Global Perspective

jasoncrawford · May 26, 2023, 12:39 PM
16 points
1 comment · 5 min read · LW link
(rootsofprogress.org)

Helio-Selenic Laser Telescope (in SPACE!?)

Alexander Gietelink Oldenziel · May 26, 2023, 11:24 AM
8 points
2 comments · 4 min read · LW link

[Question] Why is violence against AI labs a taboo?

ArisC · May 26, 2023, 8:00 AM
−21 points
63 comments · 1 min read · LW link

Where do you lie on two axes of world manipulability?

Max H · May 26, 2023, 3:04 AM
31 points
15 comments · 3 min read · LW link

Some thoughts on automating alignment research

Lukas Finnveden · May 26, 2023, 1:50 AM
30 points
4 comments · 6 min read · LW link

[Question] What’s your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5?

Super AGI · May 26, 2023, 1:43 AM
7 points
15 comments · 1 min read · LW link

Before smart AI, there will be many mediocre or specialized AIs

Lukas Finnveden · May 26, 2023, 1:38 AM
58 points
14 comments · 9 min read · LW link · 1 review

how humans are aligned

bhauth · May 26, 2023, 12:09 AM
14 points
3 comments · 1 min read · LW link

[Question] What vegan food resources have you found useful?

Elizabeth · May 25, 2023, 10:46 PM
29 points
6 comments · LW link

Mob and Bailey

Screwtape · May 25, 2023, 10:14 PM
82 points
17 comments · 7 min read · LW link · 1 review

Look At What’s In Front Of You (Conclusion to The Nuts and Bolts of Naturalism)

LoganStrohl · May 25, 2023, 7:00 PM
50 points
1 comment · 2 min read · LW link