Engaging Seriously with Short Timelines

It seems like transformative AI might be coming fairly soon. By transformative AI I just mean AI that will rapidly accelerate economic and technological progress. Of course, I am not ruling out a true singularity either. I am assuming such technology can be created using variants of current deep learning techniques.

Paul Christiano has written up arguments for a ‘slow takeoff’ in which “There will be a complete 4-year interval in which world output doubles, before the first 1-year interval in which world output doubles.” It is unclear to me whether that is more or less likely than a rapid and surprising singularity, but it certainly seems much easier to prepare for. I don’t think we have a good model of exactly what will happen, so we should prepare for as many winnable scenarios as we can.

What should we do now if we think big changes are coming soon? Here are some ideas:

Work on quickly usable AI safety theory: Iterated Amplification and Distillation—If timelines are short, we might not have time for provably safe AI. We need AI safety theory that can be applied quickly to neural nets. Any techniques that can quickly be used to align GPT-style AI are very high value. If you have the ability, work on them now.

IDA is a good framework to bet on, in my opinion, and OpenAI seems to be betting on it. Here is an explanation. Here is a LessWrong discussion. If you are mathematically inclined and understand the basics of deep learning, now might be a great time to read the IDA papers and see if you can contribute.

Get capital while you can—Money is broadly useful and can be quickly converted into other resources in a critical moment. At the very least, money can be converted into time. Be frugal; you might need your resources soon.

Besides, the value of human capital might fall. If you have a lucrative position (e.g. in finance or tech), now is a good time to focus on making money. Investing in your human capital by going back to school is a bad idea.

Invest capital in companies that will benefit from AI technology—Tech stocks are already expensive, so great deals will be hard to find. But if things get crazy, you want your capital to grow rapidly. I would especially recommend this hedge on ‘transformative AI’ if you will be rich anyway should nothing crazy happen.

I am doing something like the following portfolio:

ARKQ − 27%
BOTZ − 9%
Microsoft − 9%
Amazon − 9%
Alphabet − 8% (ARKQ is ~4% Alphabet)
Facebook − 7%
Tencent − 6%
Baidu − 6%
Apple − 5%
IBM − 4%
Tesla − 0% (ARKQ is ~10% Tesla)
Nvidia − 2% (both BOTZ and ARKQ hold Nvidia)
Intel − 3%
Salesforce − 2%
Twilio − 1.5%
Alteryx − 1.5%

BOTZ and ARKQ are ETFs with pretty high expense ratios. You can replicate them yourself if you want to save the 68-75 basis points. BOTZ is fairly easy to replicate with only ~$10K.
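Since ARKQ and BOTZ overlap with several of the individual positions above, it helps to compute look-through exposure. A minimal sketch, using the rough ETF holding weights mentioned above (ARKQ ~10% Tesla, ~4% Alphabet); these are approximations from the text, not live fund data:

```python
# Look-through exposure: direct portfolio weight plus weight held
# indirectly through ETF positions. ETF compositions below are rough
# figures from the text, not live data.

direct = {"ARKQ": 0.27, "BOTZ": 0.09, "Alphabet": 0.08, "Tesla": 0.0}

# Approximate (and partial) composition of each ETF.
etf_holdings = {
    "ARKQ": {"Tesla": 0.10, "Alphabet": 0.04},
    "BOTZ": {},  # Nvidia weight not specified in the text
}

def effective_exposure(stock: str) -> float:
    """Direct weight plus indirect weight through each ETF position."""
    total = direct.get(stock, 0.0)
    for etf, weight in direct.items():
        total += weight * etf_holdings.get(etf, {}).get(stock, 0.0)
    return total

print(f"Alphabet: {effective_exposure('Alphabet'):.4f}")  # 0.08 + 0.27*0.04 = 0.0908
print(f"Tesla:    {effective_exposure('Tesla'):.4f}")     # 0.27*0.10 = 0.0270
```

This is why a 0% direct Tesla allocation still leaves meaningful Tesla exposure, and why the direct Alphabet and Nvidia weights are smaller than they would otherwise be.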

Several people think that land will remain valuable in many scenarios, but I don’t see a good way to operationalize a bet on land. Some people have suggested buying options, since it is easier to get leverage and the upside is higher. Getting the timing right seems tricky to me, but if you think you can time things, buy options.
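To make the leverage-versus-timing trade-off concrete, here is a toy comparison of putting the same capital into stock versus call options. All the numbers (spot, strike, premium) are made up for illustration, and this ignores volatility, fees, and taxes:

```python
# Toy illustration of option leverage. All prices are hypothetical;
# real option pricing is far more involved.

capital = 10_000
spot = 100.0     # current share price (assumed)
strike = 110.0   # call strike (assumed)
premium = 5.0    # cost of a call on one share (assumed)

def stock_value(price_at_expiry: float) -> float:
    """Value of putting all capital into shares."""
    shares = capital / spot
    return shares * price_at_expiry

def calls_value(price_at_expiry: float) -> float:
    """Value of putting all capital into per-share calls at expiry."""
    calls = capital / premium
    return calls * max(price_at_expiry - strike, 0.0)

# If the stock doubles before expiry, the calls pay off far more:
print(stock_value(200.0))  # 20000.0
print(calls_value(200.0))  # 180000.0
# But if the big move comes after expiry, the calls are worthless:
print(calls_value(105.0))  # 0.0
```

The upside asymmetry is why options are attractive here, and the last line is the timing risk: the stock holder keeps their position, while the option holder loses everything if the move arrives late.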

Physical and Emotional Preparation—You don’t want your body or mind to fail you during the critical period. Invest in keeping yourself as healthy as possible. If you have issues with repetitive strain injury (RSI), work on fixing them now so you can give future developments your full attention.

You can also invest in mental preparation. Meditation is high value for many people. A systematic study of rationality techniques could be useful. But keep in mind that it is easy to waste time if you approach training casually. Track your results and have a system!

In general, you want to make these investments now, while you still have time. Keep in mind that these investments may conflict with attempts to increase your monetary capital. I would prioritize keeping yourself healthy. Make sure you are getting good returns on more speculative investments (and remember that many self-improvement plans fail).

Political Organizing and Influence—Technological progress does not intrinsically help people. Current technology can be used for good ends, but it can also be used to control people on a huge scale. One can interpret the rise of humanity as singularity 1.0: by the standards of previous eras, change accelerated a huge amount. ‘Singularity 1.0’ did not go so well for the animals in factory farms. Even if we align AI, we need to make the right choices, or singularity 2.0 might not go so well for most inhabitants of the Earth.

In a slow takeoff, human governments are likely to be huge players. As Milton Friedman said, “Only a crisis—actual or perceived—produces real change.” If a crisis is coming, there may be large political changes coming soon, and influencing them might be of high value. Politics can be influenced from both the outside and the inside. Given the political situation, I find it unlikely an AI arms race can be averted for long. But various sorts of intergovernmental cooperation might be possible, and increasing the odds of these deals could be high value.

Capabilities Research—This is a sketchy and rather pessimistic idea. But imagine that GPT-3 has already triggered an arms race, or at least that GPT-4 will. In that case, it might make sense to help a relatively values-aligned organization win (such as OpenAI, as opposed to the CCP). If you are, or could be, very talented at deep learning, you might have to grapple with this option.

What ideas do other people have for dealing with short timelines?

Cross-posted from my blog: Short Timelines