Robin Hanson on the futurist focus on AI

Link post


Robert Long and I recently talked to Robin Hanson—GMU economist, prolific blogger, and longtime thinker on the future of AI—about the amount of futurist effort going into thinking about AI risk.

It was noteworthy to me that Robin thinks human-level AI is a century, perhaps multiple centuries, away—much longer than the roughly 50-year estimates commonly given by AI researchers. I think these longer timelines are the source of much of his disagreement with the AI risk community about how much futurist thought should be put into AI.

Robin is particularly interested in the notion of 'lumpiness'—how much AI is likely to be furthered by a few big improvements as opposed to a slow and steady trickle of progress. If, as Robin believes, progress in most academic fields, and in AI in particular, is unlikely to be 'lumpy', then we should expect substantial warning before human-level AI arrives, rather than thinking it could happen suddenly.

The full recording and transcript of our conversation can be found here.