A case for strategy research: what it is and why we need more of it

Authors: Siebe Rozendal, Justin Shovelain, David Kristoffersson

Crossposted to the Effective Altruism Forum

Overview

To achieve any ambitious goal, some strategic analysis is necessary. Effective altruism has ambitious goals and focuses heavily on doing research. To understand how to best allocate our time and resources, we need to clarify what our options in research are. In this article, we describe strategy research and relate it to values research, tactics research, informing research, and improvement research. We then apply the lens of strategy research to existential risk reduction, a major cause area of effective altruism. We propose a model in which the marginal value of a research type depends strongly on the maturity of the research field. Finally, we argue that strategy research should currently be given higher priority than other research in existential risk reduction because of the significant amount of strategic uncertainty, and we provide specific recommendations for different actors.

Introduction

Effective altruism is regularly framed as “figuring out how to do the most good, and then doing it.” However, figuring out how to do the most good is not easy. Different groups reach different conclusions. So how do we figure out how to do the most good?

Quite obviously, the first step is to figure out our values. We need to know what we roughly mean by ‘the most good.’ However, once our moral uncertainty is significantly diminished, what is the next step in figuring out how to do the most good? We believe the next step should be strategy research: high-level research on how to best achieve a high-level goal. A brief case was made for strategic analysis by Nick Bostrom in Superintelligence (p. 317):

“Against a backdrop of perplexity and uncertainty, [strategic] analysis stands out as being of particularly high expected value. Illumination of our strategic situation would help us target subsequent interventions more effectively. Strategic analysis is especially needful when we are radically uncertain not just about some detail of some peripheral matter but about the cardinal qualities of the central things. For many key parameters, we are radically uncertain even about their sign…”

This was written in the context of existential risk from AI. We believe it applies to existential risks in general, and that strategy research should be a core part of other effective altruism areas as well. However, this leaves many open questions. What is strategy research? How does strategy research relate to other types of research? When should strategy research be prioritized and when should it not be? These questions are relevant to potential and current researchers, research managers, and funders. The answers are necessary to allocate resources effectively. This article also provides motivation for the founding of the existential risk strategy research organization Convergence. Convergence will be publishing more strategic analyses going forward. This article represents our current best (and somewhat simplified) understanding of the concepts outlined. Because we strive to clarify basic concepts and arguments, we have left out some of the finer details and complexities. We intend to further disentangle, clarify, and develop the ideas in the future. Furthermore, the underlying ideas presented here generalize to other fields, but some fields are in a different stage than existential risk reduction is and therefore need different research priorities.

To clarify what we are arguing for, we first describe strategy research and relate it to other types of research. We then argue that strategy research is important for reducing existential risk. We propose that the marginal value of strategy research depends on the maturity of the research field. We conclude that the current immaturity of the existential risk research field makes further strategy research highly valuable.

What is strategy research?

Strategy research seems intuitively valuable. But what is it, more precisely? Understanding this, and the different options in research, will help us make good decisions about how to allocate our resources and how to direct our research efforts. In this section, we position strategy research within a framework of different research types in effective altruism, we give an explicit definition, and we distinguish our terms from other commonly used terms.

Five classes of effective altruism research

To put strategy research in context with other types of research, we have developed a classification of different research types. Naturally, the classification is a simplification, and research will often not fit neatly into a single category.

The research spine of effective altruism: three levels

We can approach ‘figuring out what to do’ at three different levels of directness (which are inspired by the same kind of goal hierarchy as the Values-to-Actions Chain). Most indirectly, we can ask ‘what should we value?’ We call that values research, which is roughly the same as ethics. From our values, we can derive a high-level goal to strive for. For longtermist values, such a goal could be to minimize existential risk.[1] For another set of values, such as animal-inclusive neartermism, the high-level goal could be to minimize the aggregate suffering of farm animals.[2]

More directly, we can ask ‘given our goal, how can we best achieve it?’ We call the research to answer that question strategy research. The result of strategy research is a number of strategic goals embedded in a strategic plan. For example, in existential risk reduction, strategy research could determine how to best allocate resources between reducing various existential risks based on their relative risk levels and timelines.

Most directly, we can ask ‘given our strategic plan, how should we execute it?’ We call the research to answer that question tactics research. Tactics research is similar to strategy research, but operates at a more direct level. This makes tactics more specific. For example, in existential risk reduction, tactics research could take one of the subgoals from a strategic plan, say ‘reduce the competitive dynamics surrounding human-level AI’, and ask a specific question that deals with part of the issue: ‘How can we foster trust and cooperation between the US and Chinese governments on AI development?’ In general, less direct questions have more widely relevant answers, but they also provide less specific recommendations for actions to take.

Finally, the plans can be implemented based on the insights from the three research levels.

Each level of research requires some inputs, which it then processes to produce outputs for the more direct level of research. For example, strategy research requires a goal or value to strive for, and this needs to be informed by moral philosophy.[3] When strategy research is skipped, tactics research and implementation are driven only by implicit models. For example, a naive and implicit model is ‘when something seems important, try to persuade influential people of that.’ Acting on such a model can do harm. In emerging research fields, implicit models are often wrong because they have received less thought and have not been exposed to feedback. For tactics research and implementation to be effective, they should often be driven by a well-informed and thoughtfully crafted strategy.

The boundary between strategy and tactics is gradual rather than sharp. Thus, some research questions fall somewhere in between. Note as well that implementation is very simplified here; it refers to a host of actions. Implementation can be ‘doing more research’, but it can also be ‘trying to change the opinions of key stakeholders’ or ‘building up research capacity.’

A spine is not sufficient: informing and improvement research

You could say that these levels form a spine: they create a central structure that supports and structures the rest of the necessary building blocks. For instance, strategic clarity makes information more useful by giving it a structure to fit into. To illustrate this, imagine learning a piece of information about an improved method of gene writing. Without any strategic understanding, it would just be an amorphous piece of information; it would not be clear how learning it should affect your actions. However, with more strategic clarity it would be clearer how this new method could affect important parameters, what the possible consequences of that are, and how one should best react to it.

Still, a spine is not a complete body; it needs additional building blocks. Strategic clarity cannot be achieved without being sufficiently informed about the state of the world, or without understanding how to effectively conduct research in a domain.

Therefore, in addition to the research levels, we identify two further research classes:[4] informing research and improvement research. Informing research mostly concerns questions about what the world is like. These can be very important questions, and science has built an enormous trove of such knowledge that effective altruism can draw from. Improvement research helps to improve other types of research by identifying important considerations, by improving existing research methods, and by identifying useful models from other fields. Philosophy of science, epistemology, mathematics, economics, and computer science can all be used for improvement research. For example, improvement research focused on ethics could discuss the role that intuitions should have in the methodology of moral philosophy.

A definition of strategy research

Based on the model of the research classes above, we will formulate a definition of strategy research. We want a definition that is simple and captures the core purpose of strategy research. Strategy research is an imprecise concept, so the definition should reflect that. We also want the term to correspond to how people have used it in the past. For these reasons, we propose the following definition of strategy research:

   High-level research on how to best achieve a high-level goal.

Thus, the central strategy question is “how do we best achieve our high-level goal?” And to achieve a goal, you implicitly or explicitly need to form and act on plans. The challenge of strategy research is to figure out the best plans: those that best achieve a particular high-level goal given the existing constraints. To figure out the best plans, a lot of different activities are necessary. It requires, among other things, understanding which parts of the world are relevant for making plans, what actions lead to what consequences, how to compose actions into plans, and how to prioritize between plans.

This means that, in order to figure out the best plans, strategy research will involve a substantial amount of informing research, as well as improvement research. For example, if you have a model of how different risk levels and timelines should affect resource allocation, you also need to know what the different risk levels and timelines are (i.e. informing research) in order to form a comprehensive strategic plan. This research is high-level because it is focused on plans to achieve a high-level goal. In contrast, research on figuring out one’s values is top-level, and research on how to best achieve a tactical goal is low-level.[5]
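To make the relationship between the strategic model and its informing inputs concrete, here is a minimal toy sketch (ours, for illustration only) of how a strategy-level allocation rule would consume the outputs of informing research. The risk names, probabilities, timelines, and weighting rule are all hypothetical placeholders, not estimates or recommendations.

    # Toy sketch only: a strategy-level model that turns assumed risk levels and
    # timelines (inputs that informing research would have to supply) into a
    # resource split. All numbers and the weighting rule are hypothetical.
    risks = {
        # name: (assumed probability of catastrophe, assumed years until critical period)
        "risk_a": (0.10, 20),
        "risk_b": (0.03, 10),
        "risk_c": (0.01, 5),
    }

    def weight(probability, years):
        """Toy rule: weight a risk by its probability and its urgency (1 / timeline)."""
        return probability / years

    total = sum(weight(p, t) for p, t in risks.values())
    allocation = {name: weight(p, t) / total for name, (p, t) in risks.items()}

    for name, share in allocation.items():
        print(f"{name}: {share:.0%} of marginal resources")

The point is not the particular rule, but that the strategic model and the informing inputs (the probabilities and timelines) are separate pieces: improving either one changes the resulting plan.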

How do other research terms in effective altruism relate to this framework?

In effective altruism, there have been many terms used for different types of research. Each organization uses a term slightly differently, and it is difficult to find precise definitions of these terms. Let’s briefly consider some research terms in effective altruism that relate to strategy research.

Cause prioritization, prioritization research, global priorities research

These three terms have been used interchangeably to describe roughly similar types of research: prioritization between and within cause areas.[6] Prioritization between cause areas overlaps significantly with values research, although in practice it often does not deal with the more fundamental issues in ethics. Prioritization within cause areas overlaps significantly with strategy research.

Macro-strategy research

This term is mostly used by FHI, and seems to refer to uncovering crucial considerations with regard to improving the long-term future. Crucial considerations can “radically change the expected value of pursuing some high-level subgoal.”[7] A high-level subgoal refers here to things like “increase economic progress” or “decrease funding into AGI research”. The term appears intended to focus on the higher-level questions of strategy research (hence “macro”), although FHI also classifies their paper on the unilateralist’s curse as macro-strategy. That concept does not seem to be a crucial consideration, but rather a strategic consideration relevant to multiple existential risks.

AI strategy

As the term has been used in effective altruism, AI strategy research is simply strategy research focused on reducing existential risk from AI specifically.[8]

Charity evaluation

A number of organizations evaluate interventions and charities, or select charities to donate to (e.g. GiveWell, Animal Charity Evaluators, Open Philanthropy Project, Founders Pledge, Rethink Priorities). Although we would not classify charity evaluation itself as strategy research, it relies heavily on strategic views, and many of the mentioned organizations perform some kind of strategy research. As an example for neartermist, human-centric values: we would call GiveWell’s research to identify their priority programs strategy research, and would call their evaluation of charities tactics or tactics-informing research.

Why strategy research is important for reducing existential risk

Because of strategic uncertainty, we believe that more strategy research is currently particularly important for reducing existential risk. In this section, we give our main reasons and support them with a model in which the value of a research class depends on the maturity of the field. We then note some other considerations that affect the importance of strategy research and discuss how strategy research could do harm.

The current stage of existential risk research makes strategy research valuable

Strategy research makes the most sense when (1) a community knows roughly what it wants (e.g. to reduce existential risk), (2) it is unlikely that this goal will undergo substantial changes from further research or reflection on values, and (3) the field has not yet reached strategic clarity. Strategic uncertainty is the stage where the expected value of strategy research is highest; it lies between the stages of value uncertainty and strategic clarity.

Here we argue that doing strategy research should be a high priority because it is currently unclear how to best achieve existential risk reduction. Strategic uncertainty means that we are uncertain which actions are (in expectation) valuable, which are insignificant, and which are harmful. This implies that there is valuable information to be gained.

We are currently strategically uncertain

To see whether we are actually strategically uncertain, we can ask what strategic clarity would look like. The further we are away from that ideal, the more strategically uncertain we are. With strategic clarity we would know what to do. Specifically, we would know...

  • who the relevant actors are

  • what actions are available to us

  • how the future might develop from those actions

  • what good sequences of actions (plans) are

  • how to best prioritize plans

  • that we have not missed any important considerations

We currently have only a basic understanding of each of these in existential risk reduction. The claim that we are strategically uncertain in the field of existential risk seems widely shared. For example, it is echoed in this post by Carrick Flynn, and again in Superintelligence (p. 317).

Strategic uncertainty implies there is information to be gained

The cost of strategy research is only worth it if it significantly improves our understanding of which actions are (in expectation) valuable, which are insignificant, and which are harmful. Useful information has been gained in the past by uncovering crucial considerations that had a massive influence on our current priorities and plans. These include the separate realizations that AI and synthetic biology might be existential risks. More crucial considerations could be uncovered by strategy research. In addition, there are many current open questions to which different answers would imply substantially different priorities. Examples include ‘how widely is existential risk distributed over different possible causes?’, ‘when would an AI takeoff happen?’, and ‘how likely is human civilization to recover after collapse?’. There is still substantial disagreement on these questions, and progress on them would reduce our strategic uncertainty.

In addition, the information needs to be acquirable with a reasonable amount of effort. Strategy research would not be valuable if it were completely intractable. We believe that some attempts at strategy research by some actors can succeed, but it is hard to predict success beforehand.

Strategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information

Interacting with one’s environment can be highly informative. Interacting with a complex system often yields a substantial amount of information that cannot be obtained by outside observation. For example, it is hard to assess how receptive policy makers are towards existential risk reduction without engaging with them. Interacting with them would allow efficient learning about the domain.

However, this information comes with a risk. Strategic uncertainty also implies that tactical recommendations and direct implementations can be harmful. For example, approaching the wrong policy makers, or approaching them in the wrong way, can reduce the chance that existential risk is taken seriously by governments. Taking uninformed action to reduce existential risk may backfire catastrophically in hard-to-reverse and hard-to-predict ways. This reduces the net value of that action.[9]

In contrast, strategy research is less likely to directly cause harm because it gives general and imprecise recommendations. These are less likely to be wrong, and they are further away from implementation, which allows for more opportunities to correct mistakes. Strategy research is also self-correcting: it can change its focus and method based on its own generated insights; part of strategy research is to analyze whether we should continue doing strategy research.

A model of research value as a function of a field’s maturity

We have argued that we are currently strategically uncertain with respect to existential risk reduction, and that this implies that strategy research is a high priority. However, we can make a more complex model than “first solve values, then solve strategy, then solve tactics, then implement plans”. In practice, resources (e.g. capital and labour) are spread over multiple levels of research, and resources become specialized. The optimal allocation of marginal resources depends on the current state of knowledge.

We propose a model in which the cumulative value of the research levels (i.e. values, strategy, and tactics research) follows s-curves. S-curves are described as “fundamental patterns that exist in many systems that have positive feedback loops and constraints. The curve speeds up due to the positive feedback loop, then slows down due to the constraints.” In this section, we describe the different constraints and the positive feedback loop that create the s-shaped curve we expect the value of a research level to exhibit.
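For concreteness, one minimal way to parameterize such a curve (ours, purely for illustration) is the logistic function

    V(r) = \frac{V_{\max}}{1 + e^{-k\,(r - r_0)}}

where r is the cumulative research effort invested at a level, V_max is the total value obtainable at that level, k captures how strongly the positive feedback loop compounds, and r_0 marks the point where the constraints begin to dominate. None of these parameters are estimated in this article; the formula only pins down the shape of curve we have in mind.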

Early phase: constraints need to be addressed

When research on a particular level (e.g. strategy research) in a particular field (e.g. x-risk reduction) is just getting started, we expect progress to be slowed down by two constraints. The first constraint is a lack of clarity on the higher level. For instance, it is not valuable to try to figure out a good strategy when you are uncertain about your values, because you are much more likely to work on questions that turn out not to be very relevant to your values. The first constraint should be addressed at the higher level.

The second constraint is that doing early research in a field is hard. There is not yet an established paradigm; the problems are messy, entangled, and vague, rather than structured, independent, and clear. What is needed in an early stage is disentanglement: structuring the research field, identifying the central questions, and clarifying concepts. This constraint cannot be addressed by research at a higher level (resolving moral uncertainty does not help us any further with our strategic uncertainty). Consequently, it needs to be addressed head-on, which means that progress will be slow at first.

Middle phase: positive feedback loops create exponential growth

The middle phase starts when the constraints become weaker. Answers to higher-level questions narrow down the range of relevant questions at the lower level. Generally, we expect that a higher proportion of research projects produce value, because irrelevant questions can be better identified beforehand. Furthermore, as the field becomes more structured, each successful piece of research tends to identify multiple new and compelling research questions. This is a period of exponential growth.

Late phase: new constraints arise

The late phase starts when new constraints arise. One constraint is that the big questions have either been solved or been found intractable. The remaining questions will either be conceptually hard, require information that is not (yet) available, or be lower-level questions. At this point, the lower research level has progressed through its own early phase, and the marginal value of doing research at that lower level surpasses the value of doing research at the current level.

In summary, as our insight progresses, the marginal value of research shifts towards lower-level questions. A good heuristic in this model is to ‘do research at the highest level that is most sensitive to new information’.
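As an illustration of how this heuristic behaves, here is a toy sketch in which each research level’s cumulative value follows a logistic s-curve and the next unit of effort goes to the level with the highest marginal value. The parameters are hypothetical and are only chosen to mimic a field where values research is mature, strategy research is early, and tactics research is still heavily constrained.

    import math

    def cumulative_value(effort, v_max, k, midpoint):
        """Toy s-curve: cumulative value of research at one level, as a function
        of the effort invested so far (logistic shape)."""
        return v_max / (1 + math.exp(-k * (effort - midpoint)))

    def marginal_value(effort, v_max, k, midpoint, step=1.0):
        """Approximate value of one more unit of effort at this level."""
        return (cumulative_value(effort + step, v_max, k, midpoint)
                - cumulative_value(effort, v_max, k, midpoint))

    # Hypothetical parameters: values research is far along its curve, strategy
    # research is early, and tactics research has a late midpoint because it is
    # still constrained by strategic uncertainty.
    levels = {
        "values":   {"effort": 80.0, "v_max": 100.0, "k": 0.1, "midpoint": 30.0},
        "strategy": {"effort": 15.0, "v_max": 100.0, "k": 0.1, "midpoint": 40.0},
        "tactics":  {"effort":  5.0, "v_max": 100.0, "k": 0.1, "midpoint": 70.0},
    }

    for name, params in levels.items():
        print(f"{name}: marginal value of next unit ~ {marginal_value(**params):.2f}")

    best = max(levels, key=lambda name: marginal_value(**levels[name]))
    print("Spend the next marginal unit on:", best)

In this toy setup the answer comes out as ‘strategy’, and as effort accumulates at that level the recommendation eventually shifts towards tactics, which is the qualitative behaviour the model is meant to capture. Nothing about the numbers should be read as an estimate.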

Implications of the model

First, this model does not imply that, at any point in time, we should invest all resources into a single level of research. Rather, it suggests where to spend our marginal effort, which depends on the stage we are in. It is often useful to keep some resources at an earlier level, because those resources have become specialized and may be in their best position. For example, moral philosophers who believe in longtermism and existential risk reduction may want to keep working on moral philosophy to improve the rigour of the arguments and potentially uncover new (though most likely more minor) considerations. Furthermore, insights down the line might give rise to new questions higher up, so we should maintain some capacity to answer these questions.

Second, even if most of the marginal expected value today lies within strategy research, it would be useful to invest some marginal resources into tactics research and even some into implementation. There might be some easy-to-uncover tactical insights applicable to a wide range of strategic plans, trying out some tactics research might illuminate some strategic uncertainties, and building the capacity to do tactics research allows for a faster response to strategic insight.

Third, the model assumes that research at each level also involves improvement and informing research. However, this does not mean that improvement, strategy, and strategy-informing research are equally represented in each phase. It is possible that early research involves more improvement than informing research, or vice versa, but it is unclear which is more likely.

This model also addresses a common criticism of the effective altruism community, namely that the community spends so much time thinking, discussing, and doing research, and so little time taking action. (This criticism is not completely off the mark: there is productive discussion and unproductive discussion.) It is tempting to reply by pointing out all the things the effective altruism community has achieved: moved money to effective charities, set up new organisations, et cetera. However, we can also give another answer based on this model: “Yes, currently we are still focusing on research. But we are progressing at what seems to be the appropriate speed, and we will increase the amount of implementation we do as we gain more clarity.”

Other considerations that affect the value of strategy research

We believe the reasons in the previous section provide enough support for the claim that strategy research should be highly prioritized. However, there are additional important considerations that affect the strength of our claim. We believe they pose important questions, but we have significant uncertainty about them. Analyzing these considerations and providing evidence for them is beyond the scope of this article. We welcome further discussion on these points.

How much time is there for strategic insights to compound or mature into implementation?

Before a robustly good strategy can be implemented, models need to be created and refined and crucial considerations need to be uncovered. This means that strategy research needs enough time to pay off.

The higher one’s credence that we will encounter an existentially risky event soon, such as the invention of human-level AI, the more likely it is that acting on our current best guess for handling existential risk is better than systematically creating a top-down strategy.

However, we (the authors) are significantly uncertain about the timelines of various existential risks, especially of AI. Therefore we are reluctant to act as if timelines are short. Such short-term actions (e.g. raising the alarm without nuance, or trying to build a capable and reputable research field rapidly) often seem costly or harmful in the long term. In addition, many promising strategies can only affect existential risk on a medium or long timeframe. Even discounted by the probability that there is not enough time for them to be impactful, strategies with a medium to long timeframe probably have a high expected value.

How likely are the strategic insights to affect concrete actions and the environment?

Information is only valuable if it eventually affects the world. It is possible that there is already enough actionable strategic knowledge available, but that only a few people are willing and able to act on it. In such a case, resources would be better spent on lobbying influential people so they make better decisions for the future of humanity, or on increasing the influence of people who are expected to make good decisions for the future of humanity.

We believe it is hard to assess how likely insights are to affect other actors. Lobbying influential people and coalition building could be the best action for some people. In addition, influence and coalition building may take decades, which would imply that early action on this front is valuable. Nonetheless, some strategy research also takes a long time to come to fruition.

How likely is it that there are hard-to-reverse developments that require immediate action?

Sometimes it is necessary to act on insufficient information, even if we would prefer to do more strategic analysis. Our hands may be forced by other actors that are about to take hard-to-reverse actions, such as implementing premature national AI policies. New policies by major actors could significantly limit the range of possible and desirable strategies in the future if these policies are implemented prematurely or with a lack of nuance. In cases where key decision makers cannot be persuaded to exercise ‘strategic restraint’, it may be beneficial to step in and do ‘damage control’, even if everything would have been better if no one had moved early.

We believe that some hard-to-reverse actions are in fact being taken, but only some actors could find good opportunities to effectively advocate strategic restraint or do ‘damage control’. Some could even create good conditions for further (strategic) action.

How could strategy research do harm?

Just like for every other project, it’s important to consider the possibility of doing harm. We identify the following three important ways strategy research might do harm.

Strategy research may carry information hazards. Some knowledge may be dangerous to discover, and some knowledge may be dangerous if it spreads to the wrong people. In mapping possible existential risks, strategy research may uncover new ways for humans to risk existential catastrophe. Sharing those possible risks could make them more likely to occur by inspiring malicious or careless actors. Another information hazard arises when plans become known to actors with conflicting (instrumental) goals, which allows them to frustrate those plans. Some goals are more likely to conflict with other agents’ goals than others. We generally recommend against publicly identifying these conflicts, unless the other party is definitely already aware of you and your plans.

Strategy research may create strategic confusion. Badly executed or badly communicated research could confuse, rather than illuminate, important actors. Creating bad research makes it more difficult to find good research. Furthermore, strategy research could overstate the amount of strategic uncertainty and thereby excessively limit the behavior of careful actors while less careful actors could take the lead.

Strategy research may waste resources. It is hard to assess the expected value of specific strategy research projects, even after they have been completed, because it is difficult to trace consequences back to specific research projects. Even if strategy research is not worse than inaction, resources like money and talent still carry opportunity costs: they might have been used better elsewhere. We believe it is very likely that a number of projects are a waste of resources in this sense. This waste can be reduced by effective feedback loops, such as the evaluation of research organizations (like this one).

Discussion

The goal of this article was to describe strategy research more clearly and to argue that it should currently be given a high priority in the field of existential risk reduction. This article has introduced some terms and models that can increase our collective understanding of different research classes, as well as provide input for fruitful discussion. Based on our model, we proposed the heuristic to ‘do research at the highest level that is most sensitive to new information’. We believe that strategy research is currently this highest level in the field of existential risk reduction.

Recommendations

Our main recommendation is to expand the existential risk strategy field. We would like to see more strategy research from both existing and new actors in the field. What follows are some recommendations for particular groups. We encourage readers to come up with other implications.

Researchers: explore the big picture and share strategic considerations[10]

We recommend that current existential risk researchers grapple with the question of how their research focus fits within the larger picture. We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards. We believe most researchers have some implicit models which, when written up, would not meet the standards for academic publication. However, sharing them will allow these models to be built upon and improved by the community. This will also make it easier for outsiders, such as donors and aspiring researchers, to understand the crucial considerations within the field.

Research organizations: incentivize researchers

Research organizations should incentivize researchers to explore doing strategy research and to write up their ideas and findings in public venues, even if those are provisional ideas and therefore do not meet the standards for academic publication.

Donors: increase funding for existential risk strategy

We encourage donors to explore opportunities to fund new existential risk strategy organizations, as well as opportunities within existing organizations to do more strategy research. Given the newness of the research field and given that there are few established researchers, we believe this is currently a space for hits-based giving. Not all projects will pay off, but those that do will make a big difference. As funders learn and the field matures, we expect strategy research projects to become ‘safer bets’.

Effective altruists: learn, support, start

For those who aspire to move into existential risk strategy research, we recommend exploring one’s fit by doing an internship with a strategy organization or by writing and sharing a simple model of a strategy-related topic. People with operations skills can make a large impact by supporting existing strategy research, or even by starting up a new organization, since we believe there is enough room for more existential risk strategy organizations.

Limitations & further research

We have simplified a number of points in this article, and it contains a number of gaps that should be addressed in further research.

Focused on basics → elaborate on the details of strategy research

We have strived to make the basics of strategy research clear, but many details have been left out. Further research could delve deeper into the different parts of strategy research to assess what they are and which parts are most valuable, and to examine how to do effective strategy research. This research could also disentangle the difference between ‘narrow’ and ‘broad’ strategy research we allude to in footnote 4.

Focused on x-risk → assess the need for strategy research in other areas

This article, because it is written by Convergence, focuses on existential risk strategy. However, we could also have chosen to focus on effective altruism strategy, longtermism strategy, or AI strategy. Further research could approach the strategic question for a wider, narrower, or otherwise different high-level goal. For example, it appears that both community building and animal welfare would benefit greatly from more strategy research.

Incomplete risk analysis → research how strategy research can do harm

We have only briefly discussed how strategy research can do harm, and have argued that it is less likely to do harm because it is more indirect. Further research could investigate this claim further and draft guidelines to reduce the risk of harmful strategy research.

Conclusion

This article has explained, in part, why we believe strategy research is important and neglected. We hope it contributes towards strategic clarity for important goals such as reducing existential risk. Finally, we hope this article motivates other research groups, as well as donors and other effective altruists, to focus more on strategy research.

Acknowledgements

This post was written by Siebe Rozendal as a Research Assistant for Convergence, in collaboration with Justin Shovelain, who provided many of the ideas, and David Kristoffersson, who did a lot of editorial work. We are especially grateful for the thorough feedback from Ben Harack, and also want to thank Tam Borine and Remmelt Ellen for their useful input.


  1. Other high-level goals for longtermism have also been suggested, such as Beckstead’s “make path-dependent aspects of the far future go as well as possible.” ↩︎

  2. Interestingly, animal-inclusive neartermist values do not have a clear analogue to ‘minimize x-risk’. We understand that the focus on farm animals might not be the optimal goal, because it excludes the suffering of non-farm animals. ↩︎

  3. Actors do not necessarily need to share the same values to have the same high-level goals. For example, many cause areas would benefit from an effective altruism community that is healthy, full of capable people, and strategically wise. ↩︎

  4. Research often falls under multiple of these classes at the same time. For instance, research into how to build prudent national AI policies may be highly informing to strategy research (important to high-level strategy) and tactical (important to tactical questions of policy making) at the same time. Further, if a researcher is figuring out important improvement and informing issues for strategy, isn’t that strategy research? We believe it is; we prefer a “broad” definition of strategy research. In contrast, a “narrow” definition of strategy research would refer only to pure questions of strategy construction. We think there are some important distinctions and tradeoffs here that we hope to illuminate in further work. ↩︎

  5. That something is low-level does not mean it is not high quality, or not important. The level refers to the level of directness: how closely it informs action. ↩︎

  6. Whether some research is between or within a cause area depends on how a ‘cause area’ is defined. However, just like the term ‘prioritization research’, different people use the term ‘cause area’ differently. In this article, we regard ‘existential risk reduction’ as a single cause area. ↩︎

  7. Bostrom (2014). ‘Crucial Considerations and Wise Philanthropy.’ ↩︎

  8. AI governance and AI policy are two related terms. Possibly, AI policy maps to AI-risk-specific tactics research and AI governance maps to the combination of AI strategy and AI policy, but we are uncertain about this classification. We also advise against the use of the term ‘AI tactics research’ as it may sound adversarial/military-like. ↩︎

  9. Actions during strategic uncertainty can be harmful, but trying to take action could still provide useful information. This is a good reason to focus current AI policy on the near and medium term; those policies will still yield a good (though smaller) amount of information while carrying significantly lower risk of doing long-term harm. ↩︎

  10. Allan Dafoe, director of the Centre for the Governance of AI, has a different take: “Some problems are more important than others. However, we are sufficiently uncertain about what are the core problems that need to be solved that are precise enough and modular enough that they can be really focused on that I would recommend a different approach. Rather than try to find really the highest-leverage, most-neglected problem, I would advise people interested in working in this space to get a feel for the research landscape.” ↩︎