How fast can we perform a forward pass?

jsteinhardt · 10 Jun 2022 23:30 UTC
53 points
9 comments · 15 min read · LW link
(bounded-regret.ghost.io)

Summary of “AGI Ruin: A List of Lethalities”

Stephen McAleese · 10 Jun 2022 22:35 UTC
44 points
2 comments · 8 min read · LW link

How dangerous is human-level AI?

Alex_Altair · 10 Jun 2022 17:38 UTC
21 points
4 comments · 8 min read · LW link

Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later.

avturchin · 10 Jun 2022 17:24 UTC
10 points
2 comments · 1 min read · LW link

Leaving Google, Joining the Nucleic Acid Observatory

jefftk · 10 Jun 2022 17:00 UTC
114 points
4 comments · 3 min read · LW link
(www.jefftk.com)

On The Spectrum, On The Guest List: (v) The Fleur Room

party girl · 10 Jun 2022 14:50 UTC
8 points
1 comment · 14 min read · LW link
(onthespectrumontheguestlist.substack.com)

Progress Report 6: get the tool working

Nathan Helm-Burger · 10 Jun 2022 11:18 UTC
4 points
0 comments · 2 min read · LW link

[Question] Is AI Alignment Impossible?

Heighn · 10 Jun 2022 10:08 UTC
3 points
3 comments · 1 min read · LW link

I No Longer Believe Intelligence to be “Magical”

DragonGod · 10 Jun 2022 8:58 UTC
27 points
34 comments · 6 min read · LW link

[linkpost] The final AI benchmark: BIG-bench

RomanS · 10 Jun 2022 8:53 UTC
25 points
21 comments · 1 min read · LW link

[Question] Could Patent-Trolling delay AI timelines?

Pablo Repetto · 10 Jun 2022 2:53 UTC
1 point
3 comments · 1 min read · LW link

[Question] Kolmogorov’s AI Forecast

interstice · 10 Jun 2022 2:36 UTC
9 points
1 comment · 1 min read · LW link

Tao, Kontsevich & others on HLAI in Math

interstice · 10 Jun 2022 2:25 UTC
41 points
5 comments · 2 min read · LW link
(www.youtube.com)

A plausible story about AI risk.

DeLesley Hutchins · 10 Jun 2022 2:08 UTC
16 points
2 comments · 4 min read · LW link

Open Problems in AI X-Risk [PAIS #5]

10 Jun 2022 2:08 UTC
59 points
6 comments · 36 min read · LW link

[Question] why assume AGIs will optimize for fixed goals?

nostalgebraist · 10 Jun 2022 1:28 UTC
143 points
55 comments · 4 min read · LW link · 2 reviews

Bureaucracy of AIs

Logan Zoellner · 9 Jun 2022 23:03 UTC
17 points
6 comments · 14 min read · LW link

You Only Get One Shot: an Intuition Pump for Embedded Agency

Oliver Sourbut · 9 Jun 2022 21:38 UTC
24 points
4 comments · 2 min read · LW link

[Question] Forestalling Atmospheric Ignition

Lone Pine · 9 Jun 2022 20:49 UTC
11 points
9 comments · 1 min read · LW link

How Do Selection Theorems Relate To Interpretability?

johnswentworth · 9 Jun 2022 19:39 UTC
60 points
14 comments · 3 min read · LW link

Progress links and tweets, 2022-06-08

jasoncrawford · 9 Jun 2022 19:13 UTC
11 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

If no near-term alignment strategy, research should aim for the long-term

harsimony · 9 Jun 2022 19:10 UTC
7 points
1 comment · 1 min read · LW link

Operationalizing two tasks in Gary Marcus’s AGI challenge

Bill Benzon · 9 Jun 2022 18:31 UTC
12 points
3 comments · 8 min read · LW link

Why it’s bad to kill Grandma

dynomight · 9 Jun 2022 18:12 UTC
29 points
14 comments · 8 min read · LW link
(dynomight.substack.com)

[Question] Modeling humanity’s robustness to GCRs?

Fer32dwt34r3dfsz · 9 Jun 2022 17:34 UTC
2 points
2 comments · 2 min read · LW link

[Question] If there was a millennium equivalent prize for AI alignment, what would the problems be?

Yair Halberstadt · 9 Jun 2022 16:56 UTC
17 points
4 comments · 1 min read · LW link

Book Review: How the World Became Rich

Davis Kedrosky · 9 Jun 2022 16:55 UTC
14 points
0 comments · 10 min read · LW link
(daviskedrosky.substack.com)

Covid 6/9/22: Nice

Zvi · 9 Jun 2022 16:30 UTC
26 points
2 comments · 12 min read · LW link
(thezvi.wordpress.com)

Website For Yoda Timers

Adam Zerner · 9 Jun 2022 16:28 UTC
16 points
1 comment · 1 min read · LW link

AI Could Defeat All Of Us Combined

HoldenKarnofsky · 9 Jun 2022 15:50 UTC
170 points
42 comments · 17 min read · LW link
(www.cold-takes.com)

The “mind-body vicious cycle” model of RSI & back pain

Steven Byrnes · 9 Jun 2022 12:30 UTC
77 points
30 comments · 12 min read · LW link

[Linkpost & Discussion] AI Trained on 4Chan Becomes ‘Hate Speech Machine’ [and outperforms GPT-3 on TruthfulQA Benchmark?!]

Yitz · 9 Jun 2022 10:59 UTC
16 points
5 comments · 2 min read · LW link
(www.vice.com)

Comment reply: my low-quality thoughts on why CFAR didn’t get farther with a “real/efficacious art of rationality”

AnnaSalamon · 9 Jun 2022 2:12 UTC
253 points
62 comments · 17 min read · LW link · 1 review

Today in AI Risk History: The Terminator (1984 film) was released.

Impassionata · 9 Jun 2022 1:32 UTC
−3 points
6 comments · 1 min read · LW link

There’s probably a tradeoff between AI capability and safety, and we should act like it

David Johnston · 9 Jun 2022 0:17 UTC
3 points
3 comments · 1 min read · LW link

[Question] Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?

P. · 8 Jun 2022 22:26 UTC
59 points
51 comments · 4 min read · LW link

Entitlement as a major amplifier of unhappiness

VipulNaik · 8 Jun 2022 22:08 UTC
29 points
6 comments · 7 min read · LW link

[Question] Silly Online Rules

Gunnar_Zarncke · 8 Jun 2022 20:40 UTC
8 points
12 comments · 1 min read · LW link

Untypical SIA

avturchin · 8 Jun 2022 14:23 UTC
5 points
3 comments · 2 min read · LW link

Eliciting Latent Knowledge (ELK) - Distillation/Summary

Marius Hobbhahn · 8 Jun 2022 13:18 UTC
69 points
2 comments · 21 min read · LW link

Research Questions from Stained Glass Windows

StefanHex · 8 Jun 2022 12:38 UTC
4 points
0 comments · 2 min read · LW link

[Question] Steelmanning Marxism/Communism

Suh_Prance_Alot · 8 Jun 2022 10:05 UTC
6 points
9 comments · 1 min read · LW link

Staying Split: Sabatini and Social Justice

[DEACTIVATED] Duncan Sabien · 8 Jun 2022 8:32 UTC
151 points
28 comments · 21 min read · LW link

Less Wrong / ACX Budapest June 11th Meetup

Richard Horvath · 8 Jun 2022 5:16 UTC
2 points
0 comments · 1 min read · LW link

Puddle Temperature Alarm

jefftk · 8 Jun 2022 2:10 UTC
13 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Why I don’t believe in doom

mukashi · 7 Jun 2022 23:49 UTC
6 points
30 comments · 4 min read · LW link

“Pivotal Acts” means something specific

Raemon · 7 Jun 2022 21:56 UTC
127 points
23 comments · 2 min read · LW link

Embodiment is Indispensable for AGI

P. G. Keerthana Gopalakrishnan · 7 Jun 2022 21:31 UTC
6 points
1 comment · 6 min read · LW link
(keerthanapg.com)

Stephen Wolfram’s ideas are under-appreciated

Kenny · 7 Jun 2022 20:09 UTC
20 points
52 comments · 1 min read · LW link

Who models the models that model models? An exploration of GPT-3’s in-context model fitting ability

Lovre · 7 Jun 2022 19:37 UTC
112 points
16 comments · 9 min read · LW link