I Vouch For MIRI

Another take with more links: AI: A Reason to Worry, A Reason to Donate

I have made a $10,000 donation to the Machine Intelligence Research Institute (MIRI) as part of their winter fundraiser. This is the best organization I know of to donate money to, by a wide margin, and I encourage others to also donate. This belief comes from a combination of public information, private information and my own analysis. This post will share some of my private information and analysis to help others make the best decisions.

I consider AI Safety the most important, urgent and under-funded cause. If your private information and analysis says another AI Safety organization is a better place to give, give there. I believe many AI Safety organizations do good work. If you have the talent and skills, and can get involved directly, or get others who have the talent and skills involved directly, that’s even better than donating money.

If you do not know about AI Safety and unfriendly artificial general intelligence, I encourage you to read about them. If you’re up for a book, read this one.

If you decide you care about other causes more, donate to those causes instead, in the way your analysis says is most effective. Think for yourself, do and share your own analysis, and contribute as directly as possible.

I

I am very confident in the following facts about artificial general intelligence. None of my conclusions in this section require my private information.

Humanity is likely to develop artificial general intelligence (AGI) vastly smarter and more powerful than humans. We are unlikely to know far in advance when this is about to happen. There is wide disagreement and uncertainty on how long this will take, but certainly there is a substantial chance this happens within our lifetimes.

Whatever your previous beliefs, the events of the last year, including AlphaGo Zero, should convince you that AGI is more likely to happen, and more likely to happen soon.

If we do build an AGI, its actions will determine what is done with the universe.

If the first such AGI we build turns out to be an unfriendly AI that is optimizing for something other than humans and human values, all value in the universe will be destroyed. We are made of atoms that could be used for something else.

If the first such AGI we build turns out to care about humans and human values, the universe will be a place of value many orders of magnitude greater than it is now.

Almost all AGIs that could be constructed care about something other than humans and human values, and would create a universe with zero value. Mindspace is deep and wide, and almost all of it does not care about us.

The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.

AI Safety is a hard problem on many levels. Solving it is much harder than it looks even with the best of intentions, and circumstances are likely to conspire to give those involved very bad personal incentives. Without security mindset, value alignment and tons of advance work, chances of success are very low.

We are currently spending ludicrously little time, attention and money on this problem.

For space reasons I am not further justifying these claims here. Jacob’s post has more links.

II

In these next two sections I will share what I can of my own private information and analysis.

I know many principals at MIRI, including senior research fellow Eliezer Yudkowsky and executive director Nate Soares. They are brilliant, and are as dedicated as one can be to the cause of AI Safety and ensuring a good future for the universe. I trust them, based on personal experience with them, to do what they believe is best to achieve these goals.

I believe they have already done much exceptional and valuable work. I have also read many of their recent papers and found them excellent.

MIRI has been invaluable in laying the groundwork for this field. This is true both on the level of the field existing at all, and also on the level of thinking in ways that might actually work.

Even today, most who talk about AI Safety suggest strategies that have essentially no chance of success, but at least they are talking about it at all. MIRI is a large part of why they’re talking at all. I believe that something as simple as these DeepMind AI Safety test environments is good, helping researchers understand there is a problem much more deadly than algorithmic discrimination. The risk is that researchers will realize a problem exists, then think ‘I’ve solved these problems, so I’ve done the AI Safety thing’ when we need the actual thing the most.

From the beginning, MIRI understood the AI Safety problem is hard, requiring difficult high-precision thinking and long-term development of new ideas and tools. MIRI continues to fight to turn concern about ‘AI Safety’ into concern about AI Safety.

AI Safety is so hard to understand that Eliezer Yudkowsky decided he needed to teach the world the art of rationality so we could then understand AI Safety. He did exactly that, which is why this blog exists.

MIRI is developing techniques to make AGIs we can understand and predict and prove things about. MIRI seeks to understand how agents can and should think. If AGI comes from such models, this is a huge boost to our chances of success. MIRI is also working on techniques to make machine-learning-based agents safer, in case that path leads to AGI first. Both tasks are valuable, but I am especially excited by MIRI’s work on logic.

III

Eliezer’s model was that if we teach people to think, then they can think about AI.

What I’ve come to realize is that when we try to think about AI, we also learn how to think in general.

The paper that convinced OpenPhil to increase its grant to MIRI was about Logical Induction. That paper was impressive and worth understanding, but even more impressive and valuable in my eyes is MIRI’s work on Functional Decision Theory. This is vital to creating an AGI that makes decisions, and has been invaluable to me as a human making decisions. It gave me a much better way to understand, work with and explain how to think about making decisions.

Our society believes in and praises Causal Decision Theory, dismissing other considerations as irrational. This has been a disaster on a level hard to comprehend. It destroys the foundations of civilization. If we could spread practical, human use of Functional Decision Theory, and debate on that basis, we could get out of much of our current mess. Thanks to MIRI, we have a strong formal statement of Functional Decision Theory.
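To make that contrast concrete, here is a minimal sketch of Newcomb’s problem, the standard case where the two theories part ways. It is only an illustration under the usual assumptions (an accurate predictor, the conventional payoff numbers); the function names are my own and not taken from MIRI’s paper.

```python
# Newcomb's problem: a predictor fills an opaque box with $1,000,000 only if
# it predicts you will take just that box; a transparent box always holds $1,000.

def payoff(action: str, predicted_action: str) -> int:
    """Winnings given the agent's action and the predictor's forecast."""
    opaque = 1_000_000 if predicted_action == "one-box" else 0  # filled only if one-boxing was predicted
    transparent = 1_000  # always present
    return opaque if action == "one-box" else opaque + transparent

def cdt_choice() -> str:
    """Causal Decision Theory treats the prediction as a fixed fact the agent
    cannot influence. Two-boxing pays more under every fixed prediction, so
    any weighting over predictions (here, a plain sum) selects it."""
    actions = predictions = ("one-box", "two-box")
    return max(actions, key=lambda a: sum(payoff(a, p) for p in predictions))

def fdt_choice() -> str:
    """Functional Decision Theory notes the prediction is computed from the
    same decision procedure, so it evaluates each action as if the prediction
    matches it (assuming an accurate predictor)."""
    return max(("one-box", "two-box"), key=lambda a: payoff(a, a))

# With an accurate predictor, the CDT agent walks away with $1,000 and the
# FDT agent with $1,000,000.
print(cdt_choice(), payoff(cdt_choice(), cdt_choice()))  # two-box 1000
print(fdt_choice(), payoff(fdt_choice(), fdt_choice()))  # one-box 1000000
```

The contrast is the one described above: the agent that conditions on the output of its own decision procedure ends up with a thousand times more.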

Whenever I think about AI or AI Safety, read AI papers or try to design AI systems, I learn how to think as a human. As a side effect of MIRI’s work, my thinking, and especially my ability to formalize, explain and share my thinking, has been greatly advanced. Their work even this year has been a great help.

MIRI does basic research into how to think. We should expect such research to continue to pay large and unexpected dividends, even ignoring its impact on AI Safety.

IV

I believe it is always important to use strategies that are cooperative and information-creating, rather than defecting and information-destroying, and that preserve good incentives for all involved. If we’re not using a decision algorithm that cares more about such considerations than maximizing revenue raised, even when raising for a cause as good as ‘not destroying all value in the universe,’ it will not end well.

This means that I need to do three things. I need to share my information, as best I can. I need to include my own biases, so others can decide whether and how much to adjust for them. And I need to avoid using strategies that would distort or mislead.

I have not been able to share all my information above, due to a combination of space, complexity and confidentiality considerations. I have done what I can. Beyond that, I will simply say that what remaining private information I have on net points in the direction of MIRI being a better place to donate money.

My own biases here are clear. The majority of my friends come from the rationality community, which would not exist except for Eliezer Yudkowsky. I met my wife Laura at a community meetup. I know several MIRI members personally, consider them friends, and even ran a strategy meeting for them several years back at their request. It would not be surprising if such considerations influenced my judgment somewhat. Such concerns go hand in hand with being in a position to do extensive analysis and acquire private information. This is all the more reason to do your own thinking and analysis of these issues.

To avoid distortions, I am giving the money directly, without qualifications or gimmicks or matching funds. My hope is that this will be a costly signal that I have thought long and hard about such questions, and reached the conclusion that MIRI is an excellent place to donate money. OpenPhil has a principle that they will not fund more than half of any organization’s budget. I think this is an excellent principle. There is more than enough money in the effective altruist community to fully fund MIRI and other such worthy causes, but these funds represent a great temptation. They risk causing great distortions, and tying up action with political considerations, despite everyone’s best intentions.

As small givers (at least, relative to some), our biggest value lies not in the use of the money itself, but in the information value of the costly signal our donations give and in the virtues we cultivate in ourselves by giving. I believe MIRI can efficiently utilize far more money than it currently has, but more than that, this is me saying that I know them, I know their work, and I believe in and trust them. I vouch for MIRI.