Cynical explanations of FAI critics (including myself)

Related Posts: A cynical explanation for why rationalists worry about FAI, A belief propagation graph

Lately I’ve been pondering the fact that while there are many critics of SIAI and its plan to form a team to build FAI, few of us seem to agree on what SIAI or we should do instead. Here are some of the alternative suggestions offered so far:

  • work on computer security

  • work to improve laws and institutions

  • work on mind uploading

  • work on intelligence amplification

  • work on non-autonomous AI (e.g., Oracle AI, “Tool AI”, automated formal reasoning systems, etc.)

  • work on academically “mainstream” AGI approaches or trust that those researchers know what they are doing

  • stop worrying about the Singularity and work on more mundane goals

Given that ideal reasoners are not supposed to disagree, it seems likely that most if not all of these alternative suggestions can also be explained by their proponents being less than rational. Looking at myself and my suggestion to work on IA or uploading, I’ve noticed that I have a tendency to be initially over-optimistic about some technology and then become gradually more pessimistic as I learn more details about it, so that I end up being more optimistic about technologies that I’m less familiar with than the ones that I’ve studied in detail. (Another example of this is me being initially enamoured with Cypherpunk ideas and then giving up on them after inventing some key pieces of the necessary technology and seeing in more detail how it would actually have to work.)

I’ll skip giving explanations for other critics to avoid offending them, but it shouldn’t be too hard for the reader to come up with their own explanations. It seems that I can’t trust any of the FAI critics, including myself, nor do I think Eliezer and company are much better at reasoning or intuiting their way to a correct conclusion about how we should face the apparent threat and opportunity that is the Singularity. What useful implications can I draw from this? I don’t know, but it seems like it can’t hurt to pose the question to LessWrong.