Difficult question. I believe those links are relevant, but your formulation also implies the threat of an arms race.
My best shot for now would be this: avoid self-modification. The top priority right now is defending people from the potential harmful effects of this thing you created, because someone less benevolent might stumble upon it soon. Find people who share this sentiment and use the speedup together to think hard about the problem of defense.
Perhaps an “anti arms race” would be a more accurate notion. i.e., in one sense, waiting for the mathematics of FAI to be solved would be preferable. It would be safer to get to a point where we can mathematically ensure that the thing will be well behaved.
On the other hand, while waiting, how many will suffer and die irretrievably? If the cost for waiting was much smaller, then the answer of “wait for the math and construct the FAI rather than trying to patchwork update a spaghetti coded human mind” would be, to me, the clearly preferable choice.
Even given avoiding self-modification, massive speedup would still correspond to a significant amount of power. We already know how easily humans… change… with power. And when sped up, obviously people not sped up would seem different, “lesser”… helping to reinforce the “I am above them” sense. One might try to solve this by figuring out how to self-modify just enough to, well, not do that. But self-modification itself being a starting point for potential disaster, if one does not do it absolutely perfectly, well...
Anyways, so your suggestion would basically be “only use the power to, well, defend against the power” rather than use it to actually try to fix some of the annoying little problems in the world (like… death, and, and, and… ?)
FAI is one possible means of defense, there might be others.
You shouldn’t just wait for FAI; you should speed up FAI developers too, because it’s a race.
I think the strategy of developing a means of defense first has higher expected utility than fixing death first, because in the latter case someone else who develops uploading can destroy/enslave the world while you’re busy fixing it.