Based on your comment on Ricraz’s answer, “something that is bad for me”, I will make a guess at what you mean. Let me know if it answers your question.
Objectively (outside-perspective):
“Bad” requires defining. Define the utility function, and the answer falls out.
Depending on your goals and the context of being hurt, it might be negative, positive, or a mix of both! (e.g., being unintentionally burned while cooking, being a masochist, and being burned while protecting a clumsy loved one, respectively)
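To make the “define the utility function and the answer falls out” point concrete, here is a minimal sketch in Python. All of the names and weights are made up for illustration; the only point is that the same event, being burned, scores differently under different utility functions.

```python
# Toy sketch: the same event gets a different "badness" depending on
# which utility function (i.e. which set of goals) you evaluate it under.
# All feature names and weights are hypothetical.

def utility(event_features, goal_weights):
    """Sum the weight of each goal that the event touches."""
    return sum(goal_weights.get(f, 0.0) for f in event_features)

# One physical event ("burned"), three contexts.
cooking_accident  = {"burned"}
masochism         = {"burned", "enjoyed_sensation"}
protecting_friend = {"burned", "protected_loved_one"}

# Three utility functions with different goals.
ordinary_cook = {"burned": -1.0}
masochist     = {"burned": -1.0, "enjoyed_sensation": +2.0}
protector     = {"burned": -1.0, "protected_loved_one": +5.0}

print(utility(cooking_accident, ordinary_cook))   # -1.0 -> plainly bad
print(utility(masochism, masochist))              # +1.0 -> net positive
print(utility(protecting_friend, protector))      # +4.0 -> a mix, but worth it
```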
Subjectively:
If you mean negative utility as in the negative valence of an observation, then I would argue that negative valence is a signal about how well you’re achieving a goal. (This is from Kaj’s Non-mystical sequence.)
From a multi-agent view, you may have a sub-agent giving you a valence signal about how well you’re doing at some goal (say, a video game). If you’re really invested in the game, you might fuse with that sub-agent (identify it with a “self” tag) and suffer when you fail at the game. If you’re separate from the game, you can still receive the information about how well you’re doing, but you don’t suffer.
The more equanimity you have (the more you’re okay with things as they are), the less you personally suffer, though you can still be aware of the negative or positive valence signal.
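As a rough illustration of the fusion/equanimity point (my own toy framing, not Kaj’s actual model), you can treat valence as an information-carrying signal and suffering as the share of negative valence that a fused “self” takes personally:

```python
# Toy model (hypothetical, loosely inspired by the multi-agent picture):
# a sub-agent emits a valence signal about goal progress, and suffering
# scales with how fused "you" are with that sub-agent.

from dataclasses import dataclass

@dataclass
class SubAgent:
    goal: str
    progress: float  # 0.0 (failing badly) .. 1.0 (succeeding)
    fusion: float    # 0.0 (fully separate) .. 1.0 (fully identified as "self")

    def valence(self) -> float:
        """Signed signal about goal progress; negative means 'doing badly'."""
        return 2 * self.progress - 1

    def suffering(self) -> float:
        """Only the identified-with share of negative valence is felt as suffering."""
        return max(0.0, -self.valence()) * self.fusion

# Same game, same poor performance, different degrees of identification.
fused_player    = SubAgent("win the match", progress=0.25, fusion=0.9)
detached_player = SubAgent("win the match", progress=0.25, fusion=0.1)

# Both get the same information (valence is -0.5 in each case), but with
# equanimity (low fusion) almost none of it is experienced as suffering.
print(fused_player.valence(), fused_player.suffering())       # ≈ -0.5, ≈ 0.45
print(detached_player.valence(), detached_player.suffering()) # ≈ -0.5, ≈ 0.05
```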
“‘Bad’ requires defining. Define the utility function, and the answer falls out.” Exactly. How should it be defined?