Well, I guess that’s fair enough. In the quote at the top, though, I am specifically criticizing the extreme view. At the end of the day, the entire raison d’être for SI’s existence is the claim that without paying you the risk would be higher; the claim that you are somehow fairly unique. And there are many risks, for example the risk of a lethal flu-like pandemic, which are much more clearly understood and where specific efforts have a much more clearly predictable outcome of reducing the risk. Favouring one group of AI theorists over others does not have a clearly predictable risk-reducing outcome.
(I am inclined to believe that pandemic preparedness is underfunded because a pandemic would primarily decimate the poorer countries, ending the existence of entire cultures, whereas ‘existential risk’ is a fancy phrase for a risk to the privileged.)