https://wiki.lesswrong.com/wiki/Paperclip_maximizer is the canonical example of oversimplified goal optimization. I bring it up mostly as a reminder that getting your motivational model wrong can lead to undesirable actions and results.
Which leads to my main point. You’re recommending one type of pleasure over another, based on it being more aligned with your non-pleasure-measured goals. I’m wondering why you are arguing for this, as opposed to just pursuing the goals directly, without consideration of pleasure.
Ah, now I’ve got what you mean. Thanks for referring me to that thought experiment; I don’t have much prior knowledge of the field of AI, so that was definitely a new insight for me.
I see now that my original shortform did not explicitly state that my terminal value was indeed the fulfillment of important goals. I was reflecting more on the distinction between pleasurable feelings that lead to distraction and bad habits versus ones that lead to the actual fulfillment of goals. It was a personal reminder to experience the latter in place of the former, as much as I can.
Now, I hold the view that pleasure can be a useful tool in the pursuit of my goals: a means to that end. An important caveat your response reminded me of, though, is that pursuing goals is sometimes not immediately pleasurable, so it might be wise not to naïvely expect pleasure from every part of the process.