[LINK] How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects

If you’re interested in evolution, anthropics, and AI timelines—or in what the Singularity Institute has been producing lately—you might want to check out this new paper by SingInst research fellow Carl Shulman and FHI professor Nick Bostrom.

The paper:

How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects

The abstract:

Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers will see intelligent life having arisen on their planet no matter how hard it is for intelligent life to evolve on any given Earth-like planet. We explore how the evolutionary argument might be salvaged from this objection, using a variety of considerations from observation selection theory and analysis of specific timing features and instances of convergent evolution in the terrestrial evolutionary record. We find that a probabilistic version of the evolutionary argument emerges largely intact once appropriate corrections have been made.

I’d be interested to hear LW-ers’ takes on the content; Carl would also much appreciate feedback.