Nick Bostrom’s TED talk and setting priorities

I just watched Nick Bostrom’s TED talk titled “Humanity’s biggest problems aren’t what you think they are.” I was expecting a talk giving Bostrom’s take on what he thought the three biggest existential (or at least catastrophic) risks are, but instead, “existential risk” was just one item on the list. The other two were “death” and “life isn’t usually as wonderful as it could be.”

Putting these other two in the same category as “existential risk” seems like a mistake. This seems especially obvious in the case of (the present, normal rate of) death and existential risk. Bostrom’s talk gives an annual death rate of 56 million, whereas if you take future generations into account, a 1% reduction in existential risk could save 10^32 lives.
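To get a sense of the scale gap, here is a rough back-of-the-envelope comparison, taking both of those figures at face value:

$$
\frac{10^{32}\ \text{lives}}{5.6 \times 10^{7}\ \text{deaths per year}} \approx 1.8 \times 10^{24}\ \text{years}
$$

In other words, on these numbers, a 1% reduction in existential risk is worth as much as completely eliminating death at its current rate for about 10^24 years, vastly longer than the roughly 1.4 x 10^10 years the universe has existed so far.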

More importantly, if we screw up solving “death” and “life isn’t usually as wonderful as it could be” in the next century, there will be other centuries in which we can solve them. On the other hand, if we screw up on existential risk in the next century, that’s it: humanity’s run will be over. There are no second chances when it comes to averting existential risk.

One possible counterargument is that the sooner we solve “death” and “life isn’t usually as wonderful as it could be,” the sooner we can start spreading our utopia throughout the galaxy and even to other galaxies, and with exponential growth, a century’s head start could lead to a manyfold increase in the number of utils in the history of the universe.

However, given the difficulties of building probes that travel at even a significant fraction of the speed of light, and the fact that colonizing new star systems may be a slow process even with advanced nanotech, a century may not matter much when it comes to colonizing the galaxy. Furthermore, colonizing the galaxy (or universe) may not be the sort of thing that follows an exponential curve; it may instead follow a cubic curve, as probes spread out in a sphere.
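To make the cubic point concrete, here is a toy model, under the simplifying assumptions that colonization spreads outward at a constant effective speed v and that star systems are roughly uniformly distributed. The number of colonized systems then grows with the volume of the expanding sphere:

$$
N(t) \propto \frac{4}{3}\pi (vt)^3, \qquad \frac{N(t+100)}{N(t)} = \left(1 + \frac{100}{t}\right)^3
$$

So a century’s head start multiplies the total at any later time t by a factor that shrinks toward 1 as t grows: after ten thousand years of expansion it amounts to roughly a 3% gain, nothing like the manyfold increase that genuinely exponential growth would deliver.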

So I lean towards thinking that averting existential risks should be a much higher priority than creating a death-free, always-wonderful utopia. Or maybe not. Either way, the answer would seem to be very important for questions like how we should focus our resources, and also whether you should push the button to turn on a machine that will allegedly create a utopia.