I don’t think aversion to religion is a good way to summarize the LessWrong discourse around religion. Rationalists generally have complex intellectual positions.
(1) Beyond the Reach of God lays out Eliezer’s positions from more than a decade ago. One of the points it makes is that most conceptions of religion suggest that we don’t live in a world that just gets wiped out by an x-risk for no good reason, and that this would be bad.
If you look at some conceptions of Buddhism, wiping out all sentient life on Earth with a big asteroid might be seen as good, because it would end the wheel of suffering. If you believe in Christianity, a god that created the earth with a purpose wouldn’t allow humanity to just be randomly wiped out.
Even if you grant that religion, in many of its forms, is compatible with living a normal life, it’s unhelpful for thinking straight about x-risk.
(2) More recently there was a conflict around spiritual practice, where some argued that a significant portion of community members who engaged in a lot of spiritual practice ended up epistemically worse off. I don’t think it’s accurate to describe LessWrong as being united on that question.
While that debate is related, there are a lot of different things going on, and it’s not a purely intellectual debate but one about the empirical reality of what happened to people after they engaged in certain practices.
Religious doomsday cults are a kind of standard meme, and a lot of the standard theory says that the start of Armageddon is unpredictable. Beyond the Reach of God was more about a particular “pilloweon” belief Eliezer actually had. You don’t need to know x-risk by that name to be a doomsayer.
Doomsday cults believe that humanity will be extinguished for reasons that are often about human failings.
That’s quite different from the belief that there’s nobody out there who cares whether or not humanity survives. Doomsday cults are usually not engaged in effective x-risk mitigation.
Doomsday cults often intend to capitalise on the upheaval event, or at least try to be less screwed than other people when shit hits the fan. In that sense they are trying to be effective in regard to it.
Doomsayers and pilloweoners are both already resolved about whether the doom will come or not, so trying to affect it makes less sense. However, I have seen opinions like “We should delegalize gay marriage in order to reduce hurricanes”. While this posits a force outside humanity that cares, the doom is both plausible and affectable. So a god that is allowed to be wrathful doesn’t necessarily pillow humanity.
> Doomsday cults often intend to capitalise on the upheaval event, or at least try to be less screwed than other people when shit hits the fan. In that sense they are trying to be effective in regard to it.
That’s different from taking effective action to prevent the event. Taking effective action requires approaching x-risks with a mindset that assumes a lot of uncertainty, which is unlike doomsday cults, which by their nature don’t open themselves up to uncertainty.
> One of the points it makes is that most conceptions of religion suggest that we don’t live in a world that just gets wiped out by an x-risk for no good reason
I understand this part.
> and that this would be bad.
I don’t understand this part.
X-risks would be far lower under most simulation hypotheses[1]. (Not non-existent, for various reasons. Just lower.) I don’t see anyone claiming that that is a strike against simulation hypotheses. (...should it be?)
Especially[2] under simulation hypotheses that assume heavily-nested simulations. The argument is fairly simple: assume each universe launches >1 simulation (not necessarily concurrently), and assume there is some variation in x-risk between simulations. Simulations with lower x-risk then have a higher mean amount of sub-simulation time than simulations with higher x-risk. Recurse down the tree and you very quickly have the vast majority of total simulation time in universes with little-to-no x-risk[3].
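A minimal sketch of that recursion in Python, under toy assumptions of my own (the branching factor, epochs per universe, per-epoch x-risk, and recursion depth are all made up for illustration, and children are assumed to finish their runs once launched):

```python
def total_sim_time(x_risk, branching=2, epochs=10, depth=5):
    """Expected total simulation time (in epochs) accumulated by one
    universe and all of its descendants, when every universe in the
    lineage shares the same per-epoch probability of an x-risk event.

    Simplification (an assumption of this sketch): once launched, a
    child simulation runs to completion even if its parent is wiped out.
    """
    if depth == 0:
        return 0.0
    # Expected time contributed by one child lineage, one level down.
    child_time = total_sim_time(x_risk, branching, epochs, depth - 1)
    total = 0.0
    alive = 1.0  # probability this universe survives to a given epoch
    for _ in range(epochs):
        # Each surviving epoch: one epoch of own runtime, plus
        # `branching` freshly launched child lineages.
        total += alive * (1.0 + branching * child_time)
        alive *= 1.0 - x_risk  # an x-risk event ends this lineage
    return total

for risk in (0.0, 0.05, 0.2, 0.5):
    print(f"per-epoch x-risk {risk:.2f}: "
          f"expected total simulated time ~ {total_sim_time(risk):,.0f} epochs")
```

With these toy numbers the zero-x-risk lineage accumulates over a million epochs of simulated time while the 0.5-x-risk lineage accumulates a few hundred, so a randomly sampled epoch of simulation time overwhelmingly sits in a low-x-risk universe.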
Though even under non-nested simulations I can see reasonably solid arguments that x-risks would be lower[4]. If I’m trying to do a long-term simulation of the economy, having a result of “well, the economy (and everything else) collapsed two decades in due to a false vacuum collapse” is supremely unhelpful. (...and I’d probably try to edit and resume the simulation from the last checkpoint before the problem was noticeable, for that matter.)
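A toy sketch of that checkpoint-and-rollback behaviour (every name here, `run_epoch`, `run_with_rollback`, the 5% hazard rate, is a hypothetical illustration, not anything from a real simulator):

```python
import copy
import random

def run_epoch(state):
    """Advance the simulated world by one epoch; with some small
    probability a catastrophe (say, a false vacuum collapse) ruins it."""
    state = dict(state, year=state["year"] + 1)
    state["catastrophe"] = random.random() < 0.05  # assumed hazard rate
    return state

def run_with_rollback(initial_state, epochs=100, seed=0):
    """Run the simulation, but whenever an epoch ends in catastrophe,
    roll back to the last good checkpoint and try a different branch."""
    random.seed(seed)
    checkpoint = copy.deepcopy(initial_state)
    state = copy.deepcopy(initial_state)
    for _ in range(epochs):
        state = run_epoch(state)
        if state["catastrophe"]:
            # A collapsed economy (and everything else) is useless to
            # the simulator, so discard this branch and resume.
            state = copy.deepcopy(checkpoint)
        else:
            checkpoint = copy.deepcopy(state)
    return state

final = run_with_rollback({"year": 0, "catastrophe": False})
# Observers inside only ever inhabit the accepted, catastrophe-free
# branches, so the x-risk they can measure from their own history is
# zero even though the underlying per-epoch hazard was 5%.
print(final)
```

The inside view never records the discarded branches, which is the sense in which observed x-risk ends up lower than the “raw” hazard.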
This argument only ‘really’ applies if both a) simulations can run more total sub-simulation time than their own ‘real’ time, and b) the effective decrease in x-risk from a) outweighs the x-risk of the simulation itself. That being said, a) is effectively a prerequisite of heavily-nested simulations.
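One way to make condition a) concrete (the symbols are my own assumptions: $b$ for simulations launched per universe, $c$ for the ratio of a child simulation’s total runtime to its parent’s):

$$T_{\text{sub}} = b \, c \, T_{\text{self}},$$

so a) amounts to $bc > 1$, and since sub-simulation time at nesting depth $n$ scales as $(bc)^n$, heavily-nested simulations contribute meaningful time only when a) holds.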
Then again, I can also see arguments that it would be higher, especially if you include “the simulation being unceremoniously terminated” as an x-risk. The point remains: x-risk calculations change significantly if you accept A (a god that won’t let humanity be wiped out). X-risk calculations change significantly if you accept B (that we live in a simulation). Why is this taken as a strike against A and not B?
If it were factually true that there’s a god out there who protects us against x-risk, it would be worth believing in such a god. However, you can’t defend a religion that gives the impression such a god exists with the arguments the OP listed for making religion attractive, such as providing nice rituals and supposedly enabling better marriages.