It’s a rather unavoidable side-effect of claiming that you know the optimal way to fulfill one’s utility function, especially if that claim sounds highly unusual (unite the human species in making a friendly AI that will create Utopia). There are many groups that make such claims, and either one or none of them can be right. Most people (who haven’t already bought into a different philosophy of life) think it’s the latter, and thus tend not to take someone seriously when they make extraordinary claims.
Until recognition of the Singularity’s imminence and need for attention enters mainstream scientific thought, the people most likely to join us (scientifically literate atheists and truth-lovers) will not seriously consider our claims. I haven’t read nearly as much about the nonexistence of Zeus as I have about the nonexistence of Yahweh, because the number of intelligent people who believe in Zeus is insignificant compared to the number of educated Christians. So with 99% of the developed world not focusing on friendly AI theory, it was difficult for me to conclude that Richard Dawkins and Stephen Hawking and Stephen Fry were all ignorant of one of the most important things on the planet. A few months ago I gave no more thought to cryonics than to cryptozoology, and without MoR I doubt anything would have changed.
Is the goal of the community really to get everyone into the one task of creating FAI? I’m kind of new here, but I’m personally interested in a less direct but maybe more certain (I don’t know the hard numbers, but I feel it’s synergistic) goal: achieving a stable post-scarcity economy, which could free up a lot more people to become hackers/makers of technology and participate in the collective commons. That said, I’m interested in FAI and particularly machine ethics, and I hang out here because of the rationality and self-improvement angles. In fact I got into my current academic track (embedded systems) because I’m interested in robotics and embodied intelligence, and probably got started by reading Hofstadter and trying to puzzle out how minds work.
“Come for the rationality… stay for the friendly AI” maybe?
Please don’t talk about ‘the’ goal of the community as if there’s only one. There are many.
That’s what I was wondering; thank you for providing the link to that post. I wasn’t sure how to read Locke’s statement.