“Technical implication: My worst enemy is an instance of my self.”

I think this one needs more discussion; it looks like a really valuable and interesting train of thought.

In “You’ll be who you care about,” Stuart Armstrong wrote -

Instead of wondering whether we should be selfish towards our future selves, let’s reverse the question. Let’s define our future selves as agents that we can strongly influence, and that we strongly care about.

Wedrifid replied with this gem of insight (bold added) -

Technical implication: My worst enemy is an instance of my self.

Actual implication: Relationships that don’t include a massive power differential or a complete lack of emotional connection are entirely masturbatory.

It is critical to consider that thing which is “future agents that we strongly care about and can influence” but calling those things our ‘future selves’ makes little sense unless they are, well, actually our future selves.

Pedanterrific felt the same way about it as I did -

This explains so much.

The basic point has already received some good discussion, but let's talk about the implication. Assuming for a moment that your future self is defined as an agent you can strongly influence and strongly care about, does that make your worst enemy an instance of yourself? After all, your worst enemy is also someone you strongly influence (if only adversarially) and care about intensely (if only negatively), so on that definition they seem to qualify.

Let's not get too hung up on the words "worst enemy" - I think the point still stands if we swap in "main adversary" or "chief competitor." Your thoughts?