If people do have a religion-shaped hole (and I can tell that at least some do), what are they supposed to do about it? Ignoring it to focus on real things will not plug the hole, and modifying your brain or creating a real godlike thing is not yet possible. So what are we to do?
I think the best thing to do is to plug that hole with humanism or, better still, with transhumanism. Of course this is the opposite of what Eliezer is saying in the post, and it has its problems, but I think it is the best of a bad lot of options. If you are going to treat transhumanism as your “life stance” [polite terminology for a religion substitute], then there are some things you should be very careful about:
Resist the urge to turn H+ into a cult: don’t assert the truth of a statement just because it supports the transhumanist stance. Assert a statement if and only if it is true, and cross your fingers that the truth will turn out to be rather nice ;-)
Transhumanism has no high priests, and we should strive to make sure that it never does. Avoid leader worship.
Don’t make AGI into a Super Happy Agent, but do make a successful friendly AGI into one. There’s a difference. It is very likely that an unfriendly AGI will do very nasty things to us, but it seems likely to me that a friendly AGI will surpass our wildest dreams in terms of good outcomes. [Do we have evidence for this? Is this pure wishful thinking? Well… I think a reasonably defensible and non-tautological argument could be made, given suitable definitions of “good outcome”, “friendly”, and “intelligent”, using our observations of various intelligent agents that are less intelligent than ourselves. Actually, the difficult bit is defining “good outcome”.]
(Credit for these goes to Eliezer; you can find them all here on Overcoming Bias.)