Arguments about fast takeoff


I expect “slow takeoff,” which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval. Lots of people in the AI safety community have strongly opposing views, and it seems like a really important and intriguing disagreement. I feel like I don’t really understand the fast takeoff view.
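To make that operationalization concrete, here is a minimal sketch (not from the original post) that checks it against a purely hypothetical world-output series: slow takeoff, on this definition, means a 4-year doubling interval completes before any 1-year doubling interval does.

```python
# Minimal sketch of the slow-takeoff operationalization: does world output
# complete a doubling over some 4-year interval before it ever completes a
# doubling over a 1-year interval? The GWP series below is purely hypothetical.

def first_doubling_end(gwp, window_years, steps_per_year=1):
    """Return the first index i with gwp[i] >= 2 * gwp[i - window], i.e. the
    end of the first interval of the given length over which output at least
    doubles. Returns None if no such interval exists in the series."""
    window = window_years * steps_per_year
    for i in range(window, len(gwp)):
        if gwp[i] >= 2 * gwp[i - window]:
            return i
    return None

def is_slow_takeoff(gwp, steps_per_year=1):
    """Slow takeoff (on this operationalization): some 4-year doubling
    finishes strictly before any 1-year doubling finishes."""
    four_year = first_doubling_end(gwp, 4, steps_per_year)
    one_year = first_doubling_end(gwp, 1, steps_per_year)
    if one_year is None:
        return four_year is not None
    return four_year is not None and four_year < one_year

# Hypothetical annual GWP trajectory: ~3% growth, then accelerating to ~25%.
gwp = [100 * 1.03 ** t for t in range(20)] + [180 * 1.25 ** t for t in range(1, 15)]
print(is_slow_takeoff(gwp))  # True for this particular made-up series
```

In this made-up trajectory, growth accelerates gradually enough that a 4-year doubling shows up before any single year in which output doubles, so the check returns True; a sufficiently abrupt jump in the series would make it return False.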

(Below is a short post copied from Facebook. The link contains a more substantive discussion. See also: AI Impacts on the same topic.)

I believe that the disagreement is mostly about what happens before we build powerful AGI. I think that weaker AI systems will already have radically transformed the world, while I believe fast takeoff proponents think there are factors that make weak AI systems radically less useful. This is strategically relevant because I’m imagining AGI strategies playing out in a world where everything is already going crazy, while other people are imagining AGI strategies playing out in a world that looks kind of like 2018 except that someone is about to get a decisive strategic advantage.

Here is my current take on the state of the argument:

The basic case for slow takeoff is: “it’s easier to build a crappier version of something” + “a crappier AGI would have almost as big an impact.” This basic argument seems to have a great historical track record, with nuclear weapons as the biggest exception.

On the other side there are a bunch of arguments for fast takeoff, explaining why the case for slow takeoff doesn’t work. If those arguments were anywhere near as strong as the arguments for “nukes will be discontinuous” I’d be pretty persuaded, but I don’t yet find any of them convincing.

I think the best argument is the historical analogy to humans vs. chimps. If the “crappier AGI” were like a chimp, then it wouldn’t be very useful and we’d probably see a fast takeoff. I think this is a weak analogy, because the discontinuous progress during evolution occurred on a metric that evolution wasn’t really optimizing: groups of humans can radically outcompete groups of chimps, but (a) that’s almost a flukey side-effect of the individual benefits that evolution is actually selecting on, and (b) because evolution optimizes myopically, it doesn’t bother to optimize chimps for things like “ability to make scientific progress” even if in fact that would ultimately improve chimp fitness. When we build AGI we will be optimizing the chimp-equivalent-AI for usefulness, and it will look nothing like an actual chimp (in fact it would almost certainly be enough to get a decisive strategic advantage if introduced to the world of 2018).

In the linked post I discuss a bunch of other arguments: people won’t be trying to build AGI (I don’t believe it), AGI depends on some secret sauce (why?), AGI will improve radically after crossing some universality threshold (I think we’ll cross it way before AGI is transformative), understanding is inherently discontinuous (why?), AGI will be much faster to deploy than AI (but a crappier AGI will have an intermediate deployment time), AGI will recursively improve itself (but the crappier AGI will recursively improve itself more slowly), and scaling up a trained model will introduce a discontinuity (but before that someone will train a crappier model).

I think that I don’t yet understand the core arguments/intuitions for fast takeoff, and in particular I suspect that they aren’t on my list or aren’t articulated correctly. I am very interested in getting a clearer understanding of the arguments or intuitions in favor of fast takeoff, and of where the relevant intuitions come from / why we should trust them.