John Baez Interviews with Eliezer (Parts 2 and 3)

John Baez’s This Week’s Finds (Week 311) [Part 1; added for convenience following Nancy Lebovitz’s comment]

John Baez’s This Week’s Finds (Week 312)

John Baez’s This Week’s Finds (Week 313)

I really like Eliezer’s response to John Baez’s last question in Week 313 about environmentalism vs. AI risks. I think it satisfactorily deflects much of the concern that I had when I wrote The Importance of Self-Doubt.

Eliezer says:

Anyway: In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves.
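
To make the structure of this expected-utility comparison concrete, here is a minimal sketch with purely hypothetical numbers (none of them come from the interview); the point is only that a tiny probability times an astronomically large utility difference can swamp a large probability times a modest one.

```python
# Illustrative toy numbers -- purely hypothetical, chosen only to show the
# shape of the argument, not estimates of any real quantity.
U_EXTINCT = 0.0        # boring future: intelligent life goes extinct
U_INTERESTING = 1e9    # interesting future (astronomically valuable)
SPECIES_BONUS = 1.0    # extra utility if 95% rather than 80% of species survive

# Option A: a large (50%) chance of improving the species-survival outcome.
delta_EU_species = 0.50 * SPECIES_BONUS

# Option B: a tiny (one-in-a-million) chance of turning an extinct future
# into an interesting one.
delta_EU_existential = 1e-6 * (U_INTERESTING - U_EXTINCT)

print(delta_EU_species)      # 0.5
print(delta_EU_existential)  # 1000.0 -- dominates despite the tiny probability
```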

Eliezer's claim is true as stated, but it ignores an important issue: there is feedback between more mundane current events and the eventual potential extinction of the human race. For example, the United States' involvement in Libya has a (small) influence on existential risk (I don't have an opinion as to what sort). Any impact on human society due to global warming likewise has some influence on existential risk.

Eliezer's points about comparative advantage, and about existential risk in principle dominating all other considerations, are valid, important, and well made, but passing from principle to practice is very murky in the complex human world that we live in.

Note also the points that I make in Friendly AI Research and Taskification.