A brief collection of Hinton's recent comments on AGI risk

Kaj_Sotala · May 4, 2023, 11:31 PM
143 points
9 comments · 11 min read · LW link

Robin Hanson and I talk about AI risk

KatjaGrace · May 4, 2023, 10:20 PM
39 points
8 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

Who regulates the regulators? We need to go beyond the review-and-approval paradigm

jasoncrawford · May 4, 2023, 10:11 PM
122 points
29 comments · 13 min read · LW link
(rootsofprogress.org)

Recursive Middle Manager Hell: AI Edition

VojtaKovarik · May 4, 2023, 8:08 PM
30 points
11 comments · 2 min read · LW link

AI risk/reward: A simple model

Nathan Young · May 4, 2023, 7:25 PM
3 points
0 comments · LW link

Google "We Have No Moat, And Neither Does OpenAI"

Chris_Leong · May 4, 2023, 6:23 PM
61 points
28 comments · 1 min read · LW link
(www.semianalysis.com)

Trying to measure AI deception capabilities using temporary simulation fine-tuning

alenoach · May 4, 2023, 5:59 PM
4 points
0 comments · 7 min read · LW link

[Linkpost] Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens

Curtis Huebner · May 4, 2023, 5:16 PM
10 points
1 comment · 1 min read · LW link
(arxiv.org)

Clarifying and predicting AGI

Richard_Ngo · May 4, 2023, 3:55 PM
142 points
45 comments · 4 min read · LW link

[Crosspost] AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.

otto.barten · May 4, 2023, 2:09 PM
5 points
0 comments · 9 min read · LW link
(forum.effectivealtruism.org)

AI #10: Code Interpreter and Geoff Hinton

Zvi · May 4, 2023, 2:00 PM
80 points
7 comments · 78 min read · LW link
(thezvi.wordpress.com)

Advice for interacting with busy people

Severin T. Seehrich · May 4, 2023, 1:31 PM
68 points
4 comments · 4 min read · LW link

We don't need AGI for an amazing future

Karl von Wendt · May 4, 2023, 12:10 PM
19 points
32 comments · 5 min read · LW link

Has the Symbol Grounding Problem just gone away?

RussellThor · May 4, 2023, 7:46 AM
12 points
3 comments · 1 min read · LW link

Opinion merging for AI control

David Johnston · May 4, 2023, 2:43 AM
6 points
0 comments · 11 min read · LW link

Understanding why illusionism does not deny the existence of qualia

Mergimio H. Doefevmil · May 4, 2023, 2:13 AM
0 points
17 comments · 1 min read · LW link

[New] Rejected Content Section

May 4, 2023, 1:43 AM
65 points
21 comments · 5 min read · LW link

How MATS addresses "mass movement building" concerns

Ryan Kidd · May 4, 2023, 12:55 AM
63 points
9 comments · 3 min read · LW link

Moving VPS Again

jefftk · May 4, 2023, 12:30 AM
9 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Prizes for matrix completion problems

paulfchristiano · May 3, 2023, 11:30 PM
164 points
52 comments · 1 min read · LW link
(www.alignment.org)

Alignment Research @ EleutherAI

Curtis Huebner · May 3, 2023, 10:45 PM
40 points
1 comment · 3 min read · LW link
(blog.eleuther.ai)

«Boundaries/Membranes» and AI safety compilation

Chipmonk · May 3, 2023, 9:41 PM
56 points
17 comments · 8 min read · LW link

[Question] What constraints does deep learning place on alignment plans?

Garrett Baker · May 3, 2023, 8:40 PM
9 points
0 comments · 1 min read · LW link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg C · May 3, 2023, 8:26 PM
25 points
12 comments · LW link

Formalizing the "AI x-risk is unlikely because it is ridiculous" argument

Christopher King · May 3, 2023, 6:56 PM
48 points
17 comments · 3 min read · LW link

[Question] List of notable people who believe in AI X-risk?

vlad.proex · May 3, 2023, 6:46 PM
14 points
4 comments · 1 min read · LW link

[Question] LessWrong exporting?

axiomAdministrator · May 3, 2023, 6:34 PM
0 points
3 comments · 1 min read · LW link

Progress links and tweets, 2023-05-03

jasoncrawford · May 3, 2023, 4:23 PM
13 points
0 comments · 2 min read · LW link
(rootsofprogress.org)

Personhood is a Religious Belief

jan Sijan · May 3, 2023, 4:16 PM
−41 points
28 comments · 6 min read · LW link

Slowing AI: Crunch time

Zach Stein-Perlman · May 3, 2023, 3:00 PM
11 points
1 comment · 2 min read · LW link

Finding Neurons in a Haystack: Case Studies with Sparse Probing

May 3, 2023, 1:30 PM
33 points
6 comments · 2 min read · LW link · 1 review
(arxiv.org)

Monthly Roundup #6: May 2023

Zvi · May 3, 2023, 12:50 PM
31 points
12 comments · 24 min read · LW link
(thezvi.wordpress.com)

[Question] How much do personal biases in risk assessment affect assessment of AI risks?

Gordon Seidoh Worley · May 3, 2023, 6:12 AM
10 points
8 comments · 1 min read · LW link

Communication strategies for autism, with examples

stonefly · May 3, 2023, 5:25 AM
16 points
2 comments · 7 min read · LW link

Understand how other people think: a theory of worldviews.

spencerg · May 3, 2023, 3:57 AM
2 points
8 comments · LW link

"Copilot" type AI integration could lead to training data needed for AGI

anithite · May 3, 2023, 12:57 AM
8 points
0 comments · 2 min read · LW link

Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds

JakubK · May 2, 2023, 10:50 PM
10 points
0 comments · LW link

A Case for the Least Forgiving Take On Alignment

Thane Ruthenis · May 2, 2023, 9:34 PM
100 points
85 comments · 22 min read · LW link

Are Emergent Abilities of Large Language Models a Mirage? [linkpost]

Matthew Barnett · May 2, 2023, 9:01 PM
53 points
19 comments · 1 min read · LW link
(arxiv.org)

Does descaling a kettle help? Theory and practice

philh · May 2, 2023, 8:20 PM
35 points
25 comments · 8 min read · LW link
(reasonableapproximation.net)

Avoiding xrisk from AI doesn't mean focusing on AI xrisk

Stuart_Armstrong · May 2, 2023, 7:27 PM
67 points
7 comments · 3 min read · LW link

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

May 2, 2023, 6:41 PM
32 points
0 comments · 5 min read · LW link
(newsletter.safe.ai)

My best system yet: text-based project management

jt · May 2, 2023, 5:44 PM
6 points
8 comments · 5 min read · LW link

[Question] What's the state of AI safety in Japan?

ChristianKl · May 2, 2023, 5:06 PM
5 points
1 comment · 1 min read · LW link

Five Worlds of AI (by Scott Aaronson and Boaz Barak)

mishka · May 2, 2023, 1:23 PM
22 points
6 comments · 1 min read · LW link · 1 review
(scottaaronson.blog)

Systems that cannot be unsafe cannot be safe

Davidmanheim · May 2, 2023, 8:53 AM
62 points
27 comments · 2 min read · LW link

AGI safety career advice

Richard_Ngo · May 2, 2023, 7:36 AM
132 points
24 comments · 13 min read · LW link

An Impossibility Proof Relevant to the Shutdown Problem and Corrigibility

Audere · May 2, 2023, 6:52 AM
66 points
13 comments · 9 min read · LW link

Some Thoughts on Virtue Ethics for AIs

peligrietzer · May 2, 2023, 5:46 AM
83 points
8 comments · 4 min read · LW link

Technological unemployment as another test for rationalist winning

RomanHauksson · May 2, 2023, 4:16 AM
14 points
5 comments · 1 min read · LW link