I don’t think Nick made a good case as to why these movements / belief systems deserve to be called “scams” and, more importantly, deserve to be ignored (in favor of “spending more time learning about what has actually happened in the real world”). The fact that certain hopes and threats share certain properties (which Nick numbered 1-3 in his post) is unfortunate, but I didn’t find any convincing arguments in his post showing why these hopes and threats should therefore be ignored.
(My overall position, which I’ll repeat in case anyone is curious and doesn’t want to dig up my past comments, is that Pascal’s Wager is a difficult unsolved philosophy problem, whose solution probably requires a great deal of progress in decision theory (e.g., how to handle logical uncertainty and deal with running on unreliable hardware) and ethics (e.g., how much utility should I really assign to certain outcomes). In the face of such philosophical uncertainty, my instinct is to spend a portion of my resources on the philosophical problem itself, a portion on the most promising “wager”, and the rest on other more mundane priorities. I don’t have a good argument for why this is the right (or rational) thing to do, but neither do I see any good arguments against it.)