I gave the Hutter quote I was thinking of upthread.
My aim was basically to distinguish between buying Eliezer’s claims and taking intelligence explosion and AI risk seriously, and to reject the idea that the ideas in question came out of nowhere. One can think AI risk is worth investigating without thinking much of Eliezer’s views or SI.
I agree that the cited authors would assign much lower odds of catastrophe given human-level AI than Eliezer. The same statement would be true of myself, or of most people at SI and FHI: Eliezer is at the far right tail on those views. Likewise for the probability that a small team assembled in the near future could build safe AGI first, but otherwise catastrophe would have ensued.
Well, I guess that’s fair enough. In the quote at the top, though, I am specifically criticizing the extreme view. At the end of the day, the entire raison d’être for SI’s existence is the claim that without funding you, the risk would be higher—the claim that you are somehow fairly unique. And there are many risks—for example, the risk of a lethal flu-like pandemic—which are much more clearly understood and where specific efforts have a much more clearly predictable outcome of reducing the risk. Favouring one group of AI theorists over others does not have a clearly predictable outcome of reducing the risk.
(I am inclined to believe that pandemic preparedness is underfunded because a pandemic would primarily decimate the poorer countries, ending the existence of entire cultures, whereas ‘existential risk’ is a fancy phrase for a risk to the privileged.)