I am tentatively interpreting your remark about “not wanting to leave out those I have *plonked” as an indication that you might read comments by such individuals. Therefore, I am going to reply to this remark. I estimate a small probability (< 5%) that you will actually consider what I have to say in this comment, but I also estimate that explicitly stating that estimate increases the probability, possibly raising it to as high as 10%. I estimate a much higher chance that this remark will be of some benefit to readers here, especially if they haven’t seen your earlier comments.
I think this recurrent idea of how humans are so dumb therefore we need AI is a sort of copout. Basically you’re saying I am way too lazy to do the hard science so I will hope that AI will solve it. What makes it even more of a copout is that you personally are probably not even involved in the creation of AI.
The post you are replying to made no mention of AI at all. You seem to be focusing on the word “dumb” and assuming a very narrow definition. This is interesting in that, in reading the remark, I interpreted “dumb” as meaning almost exactly what you think it is not talking about, that is, lacking knowledge and technology. Incidentally, I’m not sure how AI would not fall in the technology category.
Christians believe in salvation through Jesus—you guys believe AI, cryonics and technology will save you.
That’s an interesting claim. As one of the posters here who seems to annoy you the most, I find it interesting that I a) estimate a very tiny probability for a Singularity-type event involving AI, b) am not signed up for cryonics (although I am considering it), and c) estimate a very small chance that technology in the next fifty years will allow indefinite extension of life spans. While I am a sample size of one, I don’t think I’m that far off from the usual LW contributor (although obviously this could be due to the standard bias of humans assuming that others are similar to them).
Christians believe in an all powerful God—you hope to create an all powerful AI, the ideal god that does what you want and allows you to be as immoral as you like
I can’t speak directly for the individuals who want to create a strong, very powerful singleton AI, but your claim that they wish to do so in order to be as immoral as they like seems false. Indeed, much of the discussion about such AIs centers on human morality and how one would get an AI to obey general human moral and ethical norms. So where you get the idea that they want to be as immoral as they like is not at all clear.
I also don’t understand how trying to create a powerful AI is the same as believing in an all-powerful deity that exists independently of humans.
Christians believe in the afterlife and resurrection after death—you guys think that cryonics and computers are your tickets to resurrection and immortality.
Curiously, most of the pro-cryonics individuals here estimate low probabilities of successful cryonics; I haven’t seen anyone here make an explicit estimate of more than 25%. (If anyone here does estimate a higher chance, I’d be curious to hear it and see what their logic is.) I’ve seen multiple people here put the estimate at less than 10%. I’m pretty sure that very few religious individuals who believe in an afterlife would give that low a probability estimate.
I could keep going but I think my point is clear: the parallels are too obvious to deny. Henceforth we shall term your beliefs Christianity v2.0.
I’m also not sure why you choose to focus so much on Christianity as the comparison religion. Many religions have aspects very similar to what you laid out in your comparison. Zoroastrianism has many elements that predated Christianity, and various Jewish sects also had similar beliefs. Moreover, if any religion gets to be Christianity v2.0, it would be Islam, with possibly Mormonism or the Baha’i faith being 3.0.
Now, it is true that many transhumanists and Singularitarians (note that these are not necessarily the same thing) do have attitudes that come across as intensely religious in form. These issues have been discussed here before. (Note how those comments were voted up, which shows that such criticism, when properly targeted and well thought out, is considered worth discussing here. This provides an interesting contrast to your remarks. It is also difficult to reconcile such upvoting with your model of LW as full of fanatical transhumanist Singularitarians.)
You also seem to be again trying to score some sort of rhetorical points with name-calling and labeling. I don’t think that almost anyone here, either posters or readers, is going to be more persuaded by your opinions if you use that term. Frankly, as someone who finds a lot of the more borderline-religious aspects of transhumanism and Singularitarianism pretty disturbing, reading your remarks makes me feel more sympathetic to those viewpoints, simply out of an emotional reaction against your poor arguments.
I know you guys think this is a rationalist community, but your views on such things are so warped as to render your efforts nearly fruitless in the area of rationalism
Three questions: First, what aspects of LW’s approach to rationalism do you think are seriously warped? Second, do you think the community is monolithic in its attitude towards rationality? (I, for example, am not an epistemological Bayesian and also think that LW frequently downplays, to its own detriment, the complicated history of science and scientific discoveries. But I don’t think I’d label things as so warped.) Third, if you think that LW’s rationalism is so warped, what do you think you are gaining by posting here?
The problem with religious beliefs is not that they are false (they don’t have to be), but that they are believed for the purpose of signaling belonging to a group, rather than because they are true. This does cause them to often be wrong or not even wrong, but the wrongness is not the problem; the epistemic practices that lead to them are. Correspondingly, the reasons for a given religious belief turning out to be wrong are a different kind of story from the reasons for a given factual belief turning out to be wrong. The comparison of factual mistakes in religious beliefs and factual mistakes made by people who try to figure things out is a shallow analogy that glosses over the substance of the processes.