One thing I notice about some of the philosophical quandaries raised above, concerning both teleportation and enhancements, is that they each consider only a single life, without linking it to others.
For instance, assume you are attempting to save your child from a burning building. You can either teleport in, grab your child, and teleport out with a near-perfect success rate (although both you and your child will have teleported, you twice, your child once), or you can attempt to run into the building to do a manual rescue at some lower success rate X. Other incidental costs and risks are roughly the same and are trivial.
The obvious answer to me appears to be “I pick the teleporter instead of the lower X.”
And if I consider the alternative:
You are attempting to save your child from a burning building. You can either take standard enhancements, run in, grab your child, enhance them, and then run out with a near-perfect success rate (although both you and your child will be enhanced, permanently), or you can attempt to run into the building to do a manual rescue at some lower success rate X. Other incidental costs and risks are roughly the same and are trivial.
The obvious answer to me still appears to be “I pick the enhancements instead of the lower X.”
It seems like if a person were worried about either teleportation or enhancements, they would have to have a counterargument such as “Well, X is lower, but it’s still pretty high and above some threshold level, so in a case like that I think I’m going to either not have me and my child teleport or not have me and my child take the enhancements: I’ll go for the manual rescue.”
That argument just doesn’t seem convincing to me. I’ve tried mentally steelmanning it to find a better version, but I can’t seem to get anywhere, particularly when I consider the perspective of the person inside the building who needs help. Given a strong enough stance against the procedures, the person outside the building could plausibly conclude, within their value system, “It would be better to let the person burn to death than for me to risk my life to save them at such a low X, or to use procedures that will harm us both according to my values.”
Am I missing something that makes these types of counterargument more persuasive than I am giving them credit for?
My understanding of what makes these types of counterargument persuasive is a belief system that goes roughly like this:
I comprise a set of attributes. For convenience, I categorize those attributes into two sets, S1 and S2, such that the “teleporter” preserves S1 but destroys S2. What comes out of the “teleporter,” S1, is similar to me but not identical; the difference is S2. S2 is so valuable that even an X% chance of preserving (S1 + S2) for some very low X is more valuable than a 100% chance of preserving S1.
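Put as a rough expected-value comparison (my own restatement; V here is a hypothetical value function, not something from the original argument), the position amounts to claiming that

$$X \cdot V(S_1 + S_2) > 1 \cdot V(S_1)$$

even when X is very small, which requires assigning S2 an enormous value relative to S1.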
My own response to this is that I see no reason to value S2 at all.
But I accept that other people do.