But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
To a non-scientifically-literate person, I might say that I think electrons exist as material objects, whereas to a physicist I would invoke Tegmark’s idea that all that exist are mathematical structures.
One way to make sense of this is to think about humanity as a region in mind space, with yourself and your listener as points in that region. The atheist who hasn’t heard about Bostrom/Tegmark yet is sitting between you and your listener, and you’re just using atheism as a convenient landmark while trying to point your listener in your general direction.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence.
Why do you say that? I don’t think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas… (Umm, I guess some people had nightmares after hearing about Roko’s idea, but still, it doesn’t seem that bad overall.)
One way to make sense of this is to think about humanity as a region in mind space, with yourself and your listener as points in that region. The atheist who hasn’t heard about Bostrom/Tegmark yet is sitting between you and your listener, and you’re just using atheism as a convenient landmark while trying to point your listener in your general direction.
The listener in this case being a theist you’re trying to explain your epistemic position to, I assume. (It took me a moment to figure out the context.)
I don’t think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas…
Possibly related: “(Hugh) Everett’s daughter, Elizabeth, suffered from manic depression and committed suicide in 1996 (saying in her suicide note that she was going to a parallel universe to be with her father)” (via rwallace).
My gut feeling is the causal flow goes “manic depression → suicide, alternate universes” rather than “alternate universes → manic depression → suicide”.
Honestly, I wouldn’t be that sure. On this very site I’ve seen people say their reason for signing up for cryonics was their belief in MWI.
It would not surprise me if “suicide → hell” decreases the overall number of suicides and “suicide → anthropic principle leaves you in other universes” increases the overall number of suicides.
Really? What’s the reasoning there (if you remember)?
The post is here. The reasoning as written is:
Cryonics is reasonable—Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others… such as my future selves in forward branching multi-verses.
My comments on the subject (having cut out the tree debating MWI) can be found here.
Why do you say that? I don’t think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas… (Umm, I guess some people had nightmares after hearing about Roko’s idea, but still, it doesn’t seem that bad overall.)
I meant that a lot of arguments about what kinds of objectives a creator god might have, for example, would be very tricky to do right, with lots of appeals to difficult-to-explain Occamian intuitions. Maybe this is me engaging in typical mind fallacy though, and others would not have this problem. People going crazy is a whole other problem. Currently people don’t think very hard about cosmology or decision theory or what not. I think this might be a good thing, considering how crazy the Roko thing was.
I see. I think at this point we should be trying to figure out how to answer such questions in principle, with a view to eventually handing off the task of actually answering them to an FAI, or just to our future selves augmented with a much stronger theoretical understanding of what constitutes a correct answer to these questions. Arguing over the answers now, with our very limited understanding of the principles involved, based on our “Occamian intuitions”, does not seem like a good use of time. Do you agree?
It seems that people build intuitions about how general super-high-level philosophy is supposed to be done by examining their minds as their minds examine specific super-high-level philosophical problems. I guess the difference is that in one case you have an explicit goal of being very reflective on the processes by which you’re doing philosophical reasoning, whereas the sort of thing I’m talking about in my post doesn’t imply a goal of understanding how we’re trying to understand cosmology (for example). So yes, I agree that arguing over the answers is probably a waste of time, but arguing over which ways of approaching the answers are justified seems very fruitful. (I’m not really saying anything new here, I know—most of Less Wrong is about applying cognitive science to philosophy.)
As a side note, it seems intuitively obvious that Friendliness philosophers and decision theorists should try and do what Tenenbaum and co. do when trying to figure out what Bayesian algorithms their brains might be approximating in various domains, sometimes via reflecting on those algorithms in action. Training this skill on toy problems (like the work computational cognitive scientists have already done) in order to get a feel for how to do similar reflection on more complicated algorithms/intuitions (like why this or that way of slicing up decision theoretic policies into probabilities and utilities seems natural, for instance) seems like a potentially promising way to train our philosophical power.
I think we agree that debating e.g. what sorts of game theoretic interactions between AIs would likely result in them computing worlds like ours is probably a fool’s endeavor insofar as we hope to get precise/accurate answers in themselves and not better intuitions about how to get an AI to do similar reasoning.