Rob Bensinger

Karma: 21,028

Communications lead at MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer’s.

AI Views Snapshots

Rob Bensinger13 Dec 2023 0:45 UTC
141 points
61 comments · 1 min read

An artificially structured argument for expecting AGI ruin

Rob Bensinger7 May 2023 21:52 UTC
91 points
26 comments · 19 min read

AGI ruin mostly rests on strong claims about alignment and deployment, not about society

Rob Bensinger24 Apr 2023 13:06 UTC
70 points
8 comments · 6 min read

The basic reasons I expect AGI ruin

Rob Bensinger18 Apr 2023 3:37 UTC
187 points
72 comments · 14 min read

Four mindset disagreements behind existential risk disagreements in ML

Rob Bensinger11 Apr 2023 4:53 UTC
136 points
12 comments · 1 min read

Yudkowsky on AGI risk on the Bankless podcast

Rob Bensinger13 Mar 2023 0:42 UTC
83 points
5 comments · 1 min read

Elements of Rationalist Discourse

Rob Bensinger12 Feb 2023 7:58 UTC
215 points
47 comments · 3 min read

Thoughts on AGI organizations and capabilities work

7 Dec 2022 19:46 UTC
102 points
17 comments · 5 min read

A challenge for AGI organizations, and a challenge for readers

1 Dec 2022 23:11 UTC
300 points
33 comments · 2 min read

A common failure for foxes

Rob Bensinger14 Oct 2022 22:50 UTC
47 points
7 comments · 2 min read

ITT-passing and civility are good; “charity” is bad; steelmanning is niche

Rob Bensinger5 Jul 2022 0:15 UTC
160 points
36 comments · 6 min read · 1 review

The inordinately slow spread of good AGI conversations in ML

Rob Bensinger21 Jun 2022 16:09 UTC
173 points
62 comments · 8 min read

On saving one’s world

Rob Bensinger17 May 2022 19:53 UTC
192 points
4 comments · 1 min read

Late 2021 MIRI Conversations: AMA / Discussion

Rob Bensinger28 Feb 2022 20:03 UTC
119 points
199 comments · 1 min read

Animal welfare EA and personal dietary options

Rob Bensinger5 Jan 2022 18:53 UTC
37 points
32 comments · 3 min read

Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment

Rob Bensinger12 Dec 2021 2:08 UTC
70 points
35 comments · 7 min read

Conversation on technology forecasting and gradualism

9 Dec 2021 21:23 UTC
108 points
30 comments · 31 min read

Leaving Orbit

Rob Bensinger6 Dec 2021 21:48 UTC
50 points
17 comments · 1 min read

Discussion with Eliezer Yudkowsky on AGI interventions

11 Nov 2021 3:01 UTC
328 points
251 comments · 34 min read · 1 review

Excerpts from Veyne’s “Did the Greeks Believe in Their Myths?”

Rob Bensinger8 Nov 2021 20:23 UTC
24 points
1 comment · 16 min read