
Machine Intelligence Research Institute (MIRI)

Last edit: 7 Mar 2021 17:11 UTC by plex

The Machine Intelligence Research Institute (MIRI), formerly known as the Singularity Institute for Artificial Intelligence (not to be confused with Singularity University), is a non-profit research organization devoted to reducing existential risk from unfriendly artificial intelligence and to understanding problems related to friendly artificial intelligence. Eliezer Yudkowsky was one of its early founders and continues to work there as a Research Fellow. MIRI created and currently owns the LessWrong domain.


On MIRI’s new research directions

Rob Bensinger · 22 Nov 2018 23:42 UTC
53 points
12 comments · 1 min read · LW link
(intelligence.org)

EA Forum AMA—MIRI’s Buck Shlegeris

Rob Bensinger · 15 Nov 2019 23:27 UTC
30 points
0 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Thoughts on the Singularity Institute (SI)

HoldenKarnofsky · 11 May 2012 4:31 UTC
325 points
1,287 comments · 29 min read · LW link

My experience at and around MIRI and CFAR (inspired by Zoe Curzi’s writeup of experiences at Leverage)

jessicata · 16 Oct 2021 21:28 UTC
61 points
948 comments · 22 min read · LW link

The Rocket Alignment Problem

Eliezer Yudkowsky · 4 Oct 2018 0:38 UTC
176 points
41 comments · 15 min read · LW link

What I’ll be doing at MIRI

evhub · 12 Nov 2019 23:19 UTC
107 points
6 comments · 1 min read · LW link

AIRCS Workshop: How I failed to be recruited at MIRI.

ArthurRainbow · 7 Jan 2020 1:03 UTC
85 points
15 comments · 23 min read · LW link

Taking the reins at MIRI

So8res · 3 Jun 2015 23:52 UTC
93 points
11 comments · 3 min read · LW link

On motivations for MIRI’s highly reliable agent design research

jessicata · 29 Jan 2017 19:34 UTC
27 points
1 comment · 5 min read · LW link

Outside View(s) and MIRI’s FAI Endgame

Wei_Dai · 28 Aug 2013 23:27 UTC
21 points
60 comments · 2 min read · LW link

Daniel Dewey on MIRI’s Highly Reliable Agent Design Work

lifelonglearner · 9 Jul 2017 4:35 UTC
15 points
5 comments · 1 min read · LW link
(effective-altruism.com)

My current take on the Paul-MIRI disagreement on alignability of messy AI

jessicata · 29 Jan 2017 20:52 UTC
21 points
0 comments · 10 min read · LW link

Reply to Holden on The Singularity Institute

lukeprog · 10 Jul 2012 23:20 UTC
69 points
215 comments · 26 min read · LW link

Request for “Tests” for the MIRI Research Guide

Hazard · 13 Mar 2018 23:22 UTC
28 points
14 comments · 1 min read · LW link

[Question] Is it harder to become a MIRI mathematician in 2019 compared to in 2013?

riceissa · 29 Oct 2019 3:28 UTC
65 points
3 comments · 3 min read · LW link

MIRI Research Guide

So8res · 7 Nov 2014 19:11 UTC
71 points
63 comments · 16 min read · LW link

Harper’s Magazine article on LW/MIRI/CFAR and Ethereum

gwern · 12 Dec 2014 20:34 UTC
75 points
154 comments · 14 min read · LW link

Results from MIRI’s December workshop

Benya · 15 Jan 2014 22:29 UTC
71 points
43 comments · 6 min read · LW link

Computerphile discusses MIRI’s “Logical Induction” paper

Parth Athley · 4 Oct 2018 16:00 UTC
43 points
2 comments · 1 min read · LW link
(www.youtube.com)

Book Review: Linear Algebra Done Right (MIRI course list)

So8res · 17 Feb 2014 20:52 UTC
56 points
15 comments · 7 min read · LW link

Book Review: Naïve Set Theory (MIRI course list)

So8res · 30 Sep 2013 16:09 UTC
47 points
21 comments · 5 min read · LW link

MIRI’s Approach

So8res · 30 Jul 2015 20:03 UTC
49 points
59 comments · 16 min read · LW link

Book Review: Basic Category Theory for Computer Scientists (MIRI course list)

So8res · 19 Sep 2013 3:06 UTC
51 points
23 comments · 4 min read · LW link

MIRI’s technical research agenda

So8res · 23 Dec 2014 18:45 UTC
54 points
52 comments · 3 min read · LW link

Book Review: Cognitive Science (MIRI course list)

So8res · 9 Sep 2013 16:39 UTC
43 points
8 comments · 15 min read · LW link

New paper from MIRI: “Toward idealized decision theory”

So8res · 16 Dec 2014 22:27 UTC
41 points
22 comments · 3 min read · LW link

Book Review: Heuristics and Biases (MIRI course list)

So8res · 2 Sep 2013 15:37 UTC
41 points
22 comments · 20 min read · LW link

How does MIRI Know it Has a Medium Probability of Success?

Peter Wildeford · 1 Aug 2013 11:42 UTC
27 points
146 comments · 1 min read · LW link

Notes/blog posts on two recent MIRI papers

Quinn · 14 Jul 2013 23:11 UTC
35 points
3 comments · 1 min read · LW link

MIRI course list book reviews, part 1: Gödel, Escher, Bach

So8res · 1 Sep 2013 17:40 UTC
25 points
10 comments · 3 min read · LW link

[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

Sarokrae · 30 Aug 2014 14:04 UTC
27 points
17 comments · 1 min read · LW link

Notes on logical priors from the MIRI workshop

cousin_it · 15 Sep 2013 22:43 UTC
30 points
47 comments · 7 min read · LW link

An Introduction to Löb’s Theorem in MIRI Research

orthonormal · 23 Mar 2015 22:22 UTC
27 points
27 comments · 2 min read · LW link

Book Review: Naive Set Theory (MIRI research guide)

David_Kristoffersson · 14 Aug 2015 22:08 UTC
21 points
13 comments · 6 min read · LW link

[link] MIRI’s 2015 in review

Kaj_Sotala · 3 Aug 2016 12:03 UTC
17 points
0 comments · 1 min read · LW link

Map of (old) MIRI’s Research Agendas

Jsevillamol · 7 Jun 2019 7:22 UTC
9 points
1 comment · 1 min read · LW link

Working at MIRI: An interview with Malo Bourgon

SoerenMind · 1 Nov 2015 12:54 UTC
13 points
2 comments · 4 min read · LW link

[Question] Was CFAR always intended to be a distinct organization from MIRI?

Evan_Gaensbauer · 27 May 2019 16:58 UTC
7 points
3 comments · 1 min read · LW link

MIRI: Decisions are for making bad outcomes inconsistent

Rob Bensinger · 9 Apr 2017 3:42 UTC
14 points
6 comments · 1 min read · LW link
(intelligence.org)

Steelmanning MIRI critics

fowlertm · 19 Aug 2014 3:14 UTC
8 points
67 comments · 1 min read · LW link

Some MIRI Workshop Stuff

abramdemski · 12 Aug 2013 2:55 UTC
12 points
8 comments · 1 min read · LW link

MIRI strategy

ColonelMustard · 28 Oct 2013 15:33 UTC
3 points
96 comments · 2 min read · LW link

LessWrong and Miri mentioned in major German newspaper’s article on Neoreactionaries

-necate- · 14 Apr 2017 8:20 UTC
6 points
13 comments · 1 min read · LW link

Rodney Brooks talks about Evil AI and mentions MIRI [LINK]

ike · 12 Nov 2014 4:50 UTC
6 points
7 comments · 1 min read · LW link

Why GiveWell can’t recommend MIRI or anything like it

Bound_up · 29 Nov 2016 15:29 UTC
1 point
13 comments · 1 min read · LW link

Video Q&A with Singularity Institute Executive Director

lukeprog · 10 Dec 2011 11:27 UTC
56 points
124 comments · 15 min read · LW link

Ben Goertzel: The Singularity Institute’s Scary Idea (and Why I Don’t Buy It)

Paul Crowley · 30 Oct 2010 9:31 UTC
42 points
442 comments · 1 min read · LW link

Singularity Institute is now Machine Intelligence Research Institute

Kaj_Sotala · 31 Jan 2013 8:25 UTC
51 points
99 comments · 1 min read · LW link

Singularity Institute Executive Director Q&A #2

lukeprog · 6 Jan 2012 3:40 UTC
30 points
39 comments · 4 min read · LW link

Interview with Singularity Institute Research Fellow Luke Muehlhauser

MichaelAnissimov · 15 Sep 2011 10:23 UTC
19 points
67 comments · 1 min read · LW link

Holden Karnofsky’s Singularity Institute critique: Is SI the kind of organization we want to bet on?

Paul Crowley · 11 May 2012 7:25 UTC
19 points
11 comments · 8 min read · LW link

Holden Karnofsky’s Singularity Institute Objection 2

Paul Crowley · 11 May 2012 7:18 UTC
18 points
41 comments · 8 min read · LW link

Holden Karnofsky’s Singularity Institute Objection 1

Paul Crowley · 11 May 2012 7:16 UTC
12 points
61 comments · 3 min read · LW link

Holden Karnofsky’s Singularity Institute Objection 3

Paul Crowley · 11 May 2012 7:19 UTC
8 points
8 comments · 1 min read · LW link

Holden Karnofsky’s Singularity Institute critique: other objections

Paul Crowley · 11 May 2012 7:22 UTC
6 points
6 comments · 1 min read · LW link

SIAI—An Examination

BrandonReinhart · 2 May 2011 7:08 UTC
182 points
207 comments · 13 min read · LW link

SIAI vs. FHI achievements, 2008-2010

Kaj_Sotala · 25 Sep 2011 11:42 UTC
40 points
62 comments · 4 min read · LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks · 18 Dec 2018 4:46 UTC
190 points
26 comments · 62 min read · LW link

2019 AI Alignment Literature Review and Charity Comparison

Larks · 19 Dec 2019 3:00 UTC
130 points
18 comments · 62 min read · LW link

Two clarifications about “Strategic Background”

Rob Bensinger · 12 Apr 2018 2:11 UTC
38 points
6 comments · 1 min read · LW link

Timeline of Machine Intelligence Research Institute

riceissa · 15 Jul 2017 16:57 UTC
9 points
0 comments · 1 min read · LW link
(timelines.issarice.com)

MIRI location optimization (and related topics) discussion

Rob Bensinger · 8 May 2021 23:12 UTC
137 points
164 comments · 12 min read · LW link

Transcription of Eliezer’s January 2010 video Q&A

curiousepic · 14 Nov 2011 17:02 UTC
109 points
9 comments · 56 min read · LW link

MIRI’s 2014 Summer Matching Challenge

lukeprog · 7 Aug 2014 20:03 UTC
26 points
32 comments · 2 min read · LW link

Calling all MIRI supporters for unique May 6 giving opportunity!

lukeprog · 4 May 2014 23:45 UTC
34 points
48 comments · 2 min read · LW link

MIRI’s Winter 2013 Matching Challenge

lukeprog · 17 Dec 2013 20:41 UTC
34 points
37 comments · 3 min read · LW link

MIRI’s 2013 Summer Matching Challenge

lukeprog · 23 Jul 2013 19:05 UTC
38 points
123 comments · 2 min read · LW link

Arbital scrape

emmab · 6 Jun 2019 23:11 UTC
89 points
23 comments · 1 min read · LW link

Yudkowsky on AGI ethics

Rob Bensinger · 19 Oct 2017 23:13 UTC
49 points
6 comments · 2 min read · LW link

The Power of Reinforcement

lukeprog · 21 Jun 2012 13:42 UTC
154 points
474 comments · 4 min read · LW link

Current AI Safety Roles for Software Engineers

ozziegooen · 9 Nov 2018 20:57 UTC
69 points
9 comments · 4 min read · LW link

2017 AI Safety Literature Review and Charity Comparison

Larks · 24 Dec 2017 18:52 UTC
41 points
5 comments · 23 min read · LW link

Reflection in Probabilistic Logic

Eliezer Yudkowsky · 24 Mar 2013 16:37 UTC
107 points
172 comments · 3 min read · LW link

I Vouch For MIRI

Zvi · 17 Dec 2017 17:50 UTC
34 points
9 comments · 5 min read · LW link
(thezvi.wordpress.com)

The Singularity Institute needs remote researchers (writing skill not required)

lukeprog · 5 Feb 2012 22:02 UTC
87 points
16 comments · 1 min read · LW link

The Singularity Institute’s Arrogance Problem

lukeprog · 18 Jan 2012 22:30 UTC
84 points
308 comments · 1 min read · LW link

An Untrollable Mathematician Illustrated

abramdemski · 20 Mar 2018 0:00 UTC
153 points
38 comments · 1 min read · LW link

Opportunities for individual donors in AI safety

alexflint · 31 Mar 2018 18:37 UTC
30 points
3 comments · 11 min read · LW link

AI Summer Fellows Program

colm · 21 Mar 2018 15:32 UTC
21 points
0 comments · 1 min read · LW link

SIAI Fundraising

BrandonReinhart · 26 Apr 2011 8:35 UTC
78 points
120 comments · 6 min read · LW link

MIRI’s 2019 Fundraiser

Malo · 3 Dec 2019 1:16 UTC
55 points
0 comments · 9 min read · LW link

MIRI’s 2018 Fundraiser

Malo · 27 Nov 2018 5:30 UTC
60 points
1 comment · 7 min read · LW link

AI Summer Fellows Program

colm · 16 Mar 2018 21:57 UTC
20 points
2 comments · 1 min read · LW link

The Singularity Wars

JoshuaFox · 14 Feb 2013 9:44 UTC
82 points
25 comments · 3 min read · LW link

Botworld: a cellular automaton for studying self-modifying agents embedded in their environment

So8res · 12 Apr 2014 0:56 UTC
78 points
55 comments · 7 min read · LW link

Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

MichaelGR · 7 Jan 2010 4:40 UTC
48 points
99 comments · 1 min read · LW link

MIRI Summer Fellows Program

colm · 15 May 2019 0:28 UTC
48 points
3 comments · 2 min read · LW link

MIRI’s 2015 Summer Fundraiser!

So8res · 19 Aug 2015 0:27 UTC
64 points
45 comments · 5 min read · LW link

Help Fund Lukeprog at SIAI

Eliezer Yudkowsky · 24 Aug 2011 7:16 UTC
63 points
278 comments · 1 min read · LW link

Bet Payoff 1: OpenPhil/MIRI Grant Increase

Ben Pace · 9 Nov 2017 18:31 UTC
15 points
11 comments · 1 min read · LW link

MIRI’s 2017 Fundraiser

Malo · 1 Dec 2017 13:45 UTC
19 points
4 comments · 13 min read · LW link

So You Want to Save the World

lukeprog · 1 Jan 2012 7:39 UTC
54 points
149 comments · 12 min read · LW link

Is community-collaborative article production possible?

lukeprog · 21 Mar 2012 20:10 UTC
57 points
46 comments · 3 min read · LW link

New forum for MIRI research: Intelligent Agent Foundations Forum

orthonormal · 20 Mar 2015 0:35 UTC
53 points
43 comments · 1 min read · LW link

Existential Risk and Public Relations

multifoliaterose · 15 Aug 2010 7:16 UTC
41 points
628 comments · 5 min read · LW link

2012 Winter Fundraiser for the Singularity Institute

lukeprog · 6 Dec 2012 22:41 UTC
48 points
127 comments · 3 min read · LW link

SIAI’s Short-Term Research Program

XiXiDu · 24 Jun 2011 11:43 UTC
40 points
48 comments · 2 min read · LW link

Rationality, Singularity, Method, and the Mainstream

Mitchell_Porter · 22 Mar 2011 12:06 UTC
52 points
35 comments · 5 min read · LW link

Call for new SIAI Visiting Fellows, on a rolling basis

AnnaSalamon · 1 Dec 2009 1:42 UTC
36 points
272 comments · 2 min read · LW link

MIRI Fundraiser: Why now matters

So8res · 24 Jul 2015 22:38 UTC
42 points
4 comments · 2 min read · LW link

Revisiting SI’s 2011 strategic plan: How are we doing?

lukeprog · 16 Jul 2012 9:10 UTC
46 points
20 comments · 7 min read · LW link

Why I am not currently working on the AAMLS agenda

jessicata · 1 Jun 2017 17:57 UTC
28 points
1 comment · 5 min read · LW link

MIRI’s 2017 Fundraiser

Malo · 7 Dec 2017 21:47 UTC
27 points
5 comments · 13 min read · LW link

MIRI’s 2015 Winter Fundraiser!

So8res · 9 Dec 2015 19:00 UTC
43 points
24 comments · 7 min read · LW link

Singularity Institute Strategic Plan 2011

MichaelAnissimov · 26 Aug 2011 23:34 UTC
45 points
21 comments · 1 min read · LW link

Tallinn-Evans $125,000 Singularity Challenge

Kaj_Sotala · 26 Dec 2010 11:21 UTC
38 points
378 comments · 2 min read · LW link

What I would like the SIAI to publish

XiXiDu · 1 Nov 2010 14:07 UTC
36 points
225 comments · 3 min read · LW link

Evaluating the feasibility of SI’s plan

JoshuaFox · 10 Jan 2013 8:17 UTC
38 points
188 comments · 4 min read · LW link

GiveWell.org interviews SIAI

Paul Crowley · 5 May 2011 16:29 UTC
38 points
17 comments · 1 min read · LW link

Be a Visiting Fellow at the Singularity Institute

AnnaSalamon · 19 May 2010 8:00 UTC
38 points
171 comments · 2 min read · LW link

Q&A with new Executive Director of Singularity Institute

lukeprog · 7 Nov 2011 4:58 UTC
33 points
182 comments · 1 min read · LW link

Vingean Reflection: Reliable Reasoning for Self-Improving Agents

So8res · 15 Jan 2015 22:47 UTC
36 points
5 comments · 9 min read · LW link

Suggest alternate names for the “Singularity Institute”

lukeprog · 19 Jun 2012 4:42 UTC
33 points
159 comments · 1 min read · LW link

MIRI’s 2016 Fundraiser

So8res · 25 Sep 2016 16:55 UTC
34 points
13 comments · 6 min read · LW link

Building toward a Friendly AI team

lukeprog · 6 Jun 2012 18:57 UTC
37 points
96 comments · 3 min read · LW link

MIRI AMA plus updates

Rob Bensinger · 11 Oct 2016 23:52 UTC
18 points
1 comment · 1 min read · LW link

[Link] Nate Soares is answering questions about MIRI at the EA Forum

Rob Bensinger · 11 Jun 2015 0:27 UTC
29 points
1 comment · 9 min read · LW link

Should I believe what the SIAI claims?

XiXiDu · 12 Aug 2010 14:33 UTC
22 points
632 comments · 3 min read · LW link

MIRI needs an Office Manager (aka Force Multiplier)

alexvermeer · 3 Jul 2015 1:10 UTC
24 points
6 comments · 7 min read · LW link

Singularity Institute $100K Challenge Grant / 2009 Donations Reminder

Eliezer Yudkowsky · 30 Dec 2009 0:36 UTC
16 points
18 comments · 1 min read · LW link

SIAI call for skilled volunteers and potential interns

AnnaSalamon · 26 Apr 2009 5:56 UTC
20 points
3 comments · 2 min read · LW link

The Singularity Institute has started publishing monthly progress reports

John_Maxwell · 5 Mar 2012 8:19 UTC
28 points
23 comments · 1 min read · LW link

NPR show All Things Considered on the Singularity and SIAI

arundelo · 11 Jan 2011 22:58 UTC
32 points
15 comments · 1 min read · LW link

A Scholarly AI Risk Wiki

lukeprog · 25 May 2012 20:53 UTC
28 points
57 comments · 5 min read · LW link

MIRI: 2020 Updates and Strategy

Rob Bensinger · 23 Dec 2020 21:27 UTC
76 points
0 comments · 1 min read · LW link
(intelligence.org)

MIRI Donation Collaboration Station

Skeptityke · 29 Apr 2014 14:11 UTC
30 points
14 comments · 2 min read · LW link

[Question] Is MIRI actually hiring and does Buck Shlegeris still work for you?

seed · 13 Feb 2021 10:26 UTC
19 points
4 comments · 1 min read · LW link

Please advise the Singularity Institute with your domain-specific expertise!

lukeprog · 15 Mar 2012 20:13 UTC
26 points
33 comments · 1 min read · LW link