The genie vanishes, taking with it any memory that you ever met a genie. Since you would not remember making the wish, and since you would see no evidence of a wish having been made, you would not regret having made the wish.
This doesn’t work under the definition of “I” in the grandparent:
I is the entity standing here right now, informed about the many different aspects of the future, in parallel if need be (i.e. if I am not capable of grokking it fully, then many versions of me would be focused on different parts, in order to understand each subpart).
I disagree—if facing a wish-twisting genie, then “nothing happens” is a pretty good result. If I knew in advance that I was dealing with an actively hostile genie, I would certainly not regret a null wish even if I knew in advance it would be a null wish.
That explanation works, well done.
“Since you would not remember making the wish, and since you would see no evidence of a wish having been made, you would not regret having made the wish” does not.
(It still leaves open the possibility of wishing for an outcome I would be actively pleased with, also, but that’s a matter for the wisher, not the genie.)
Haven’t you ever played the corrupt-a-wish game?
Wish granted: horror as the genie/AI runs a matrix with copy after copy of you, brute-forcing the granting of possible wishes, most of which turn out to be an absolute disaster. But you aren’t allowed to know that happens, because the AI goes... “insane” is the best word I can think of, but it’s not quite correct... trying to grant what is nearly an ungrantable wish, freezing the population into stasis until it runs out of negentropy and crashes...
Now that’s not to say friendly AI can’t be done, but it WON’T be EASY. If your wish isn’t human-proof, it probably isn’t AI-safe.
Yes, I have. Saying “the genie goes insane because it’s not smart enough to grant your wish” is not how you play corrupt-a-wish. You’re supposed to grant the wish, but with a twist so it’s actually a bad thing.
Perhaps I didn’t make the whole “it goes and pauses the entire world while trying to grant your wish” part clear enough...
Trying and failing to grant the wish is not the same as granting it with a twist that makes it actually terrible.
If the AI can’t figure out the (future) wishes of a single human being, then it is insufficiently intelligent, and thus not the AI you would want in the first place.
The implication, as I see it, is that since (by your definition) any sufficiently intelligent AI will be able to determine (and motivated to follow) the wishes of humans, we don’t need to worry about advanced AIs doing things we don’t want.
1. Arguments from definitions are meaningless.
2. You never stated the second parenthetical, which is key to your argument and also on very shaky ground. There’s a big difference between the AI knowing what you want and doing what you want. “The genie knows but doesn’t care,” as it is said.
3. Have you found a way to make programs that never have unintended side effects? No? Then “we wouldn’t want this in the first place” doesn’t mean “it won’t happen”.