I no longer find it totally implausible that imagined people might, if modeled in enough detail, be in some sense conscious—it seems unlikely to me that human self-modeling and other-modeling logic would end up being that different—but even if we take that as given, there are a couple of problems with threatening to imagine someone in some unpleasant situation.
The basic issue is asymmetry of information. You might be able to imagine someone who thinks or even reliably acts like your enemy; but, no matter how good you are at personality modeling, your imagined person isn't going to have access to all, or even much, of your enemy's memories and experiences. Lacking that, I wouldn't say your imagined enemy is cognitively equivalent to your real enemy in a way that'd make the threat hold up.
(Skynet, by contrast, might be able to reproduce all that information by some means—brain scanning, say, or some superhuman form of induction.)