I’m also vulnerable to ideas that seem like they could lead to gaining infinite computing power in finite time. Being a bounded agent means I care only finitely much about infinite utility, but I still look into lots of ways that one could get infinite computing power that I’m sure most people would ignore outright.
I’m not sure what it means to be vulnerable to time-wasting Doom memes. I spend at the very most six hours a day really researching the possibility, probability, and survivability of Doom. Most days I spend 2 hours. I guess I could spend that time learning to play the piano or summat, but that’d feel kinda weak by comparison. And I have all those other hours to learn how to play piano and paint and cook and be awesome at everything. And on top of it seemingly being an extremely good use of my time, it’s fun for me as a nerd to be on the forefront of certain kinds of metaphysics and decision theory research.
The kind of Doom memes I take seriously are the ones that seem the most probable, of course. uFAI for instance seems really damn probable. The heuristics I use are the ones I outline in my post above about how to take ideas seriously. If I run an idea through those heuristics, and throw the kitchen sink of Less Wrong rationality techniques at it, then I start to take it rather seriously.
I didn’t mean to imply that all such thoughts are a waste, or that any of the usual worries around here are silly. I meant that if you really feel obligated to take seriously claims of alarming differences in utility, then you’d end up wasting time digging through ridiculous religious claims. Clearly it’s not the case that you do this.
Hm, I wonder how many atheists have taken Pascal’s wager seriously. If I’m not confident of the flaws of majoritarianism, then failing to Aumann-update on the testimony of a billion Christians would seem to be a bad idea. And if I think that the belief of a billion Christians is even weak evidence that a Christian god is more likely than any other god to control most of the measure of computations that include me, then the atheist-god wager argument doesn’t save me from having to disregard a possibility of infinite utility. But perhaps I forget the stronger arguments against Pascal’s wager. At any rate, you’re right that I don’t go around looking for ridiculous religious claims to worry about, but I’m at least willing to take Pascal’s wager a little bit seriously. (Failing to do so can also lead to falling into the Pascal’s wager fallacy fallacy.)
You don’t just have to worry about one specific atheist-god, but also any jealous gods; any singular god that would take beliefs about a singular god to be beliefs about itself, and feel insulted by being thought to be like what JHWH is supposed to be like; any god that punishes giving in to imagined blackmail (hell) just to make blackmail less likely; and so on. These aren’t symmetric because, e.g., anti-jealous gods that reward worship of very different gods, including one particular very jealous god, seem less likely than jealous gods.
Hmuh, I’d never exactly thought of thinking about YHWH as a blackmailing simulator AI, but in an ensemble universe that description seems to fit. That’s pretty funny. :)
Agreed—this is the usual response, and the one that works for me if I can’t quite muster up the confidence to say “0% probability for infinite-torture JHWH (or variation)”. I guess you can justify something like p=0 with a combination of: “you haven’t defined what you mean by JHWH sufficiently for me to agree or disagree”, and “ok, you’ve told me enough that I see JHWH as a logical impossibility”. Once a hypothetical god passes those bars, then you need recourse to all the possible god hypotheses. Privileging the Hypothesis is a finite-scale version of the same objection.