“I couldn’t disagree more. This kind of thinking is very important—not because we need to know RIGHT NOW in order to make some immediate and pressing policy decision, but because humans like to know where things are heading, what we are eventually aiming for. Suppose someone rejects cryonics or life extension research and opts for religion on the grounds that eternity in heaven will be “infinitely” good, but human life on earth, even technologically enhanced life, is necessarily mediocre. What can one say to such objections other than something like this series of posts?”
I’d say that if they’re willing to believe something just because it sounds nice rather than because it’s true, they’ve already given up on rationality. Is the goal to be rational and spread the truth, or to recruit people to the cause with wildly speculative optimism? I’d think just the idea of creating a super-intelligent AI that doesn’t destroy the world (if that’s even an issue—and I think there’s a good chance that it is) is a good incentive already—there’s no need to postulate a secular heaven that depends on so many things that we aren’t at all sure about yet.