I have some similar problems and I’ll try to explain them.
So I think that some of my failings may be attributable to having many contradictory values rather than to failures of rationality. An example of values that oppose one another is the desire for social interaction and the strong sense of guilt I experience at any social failing. If I try to optimize for one then I take a hit from the other, and I still don’t net very many hedons. The rational action here would be to modify or remove the errant value, but that’s difficult.
In neurotic spells in the past I’ve also acquired some desire to see myself fail, and this flares up from time to time.
I can be fairly rational while alone, but in social situations I seem to get overwhelmed with negative emotion and my rationality gets ‘knocked out’. Well, not literally knocked out; it’s just that I update on all sorts of misperceived social cues and wind up with weird aliefs and beliefs, temporarily. I can leave a social situation anticipating, on different levels, that a good friend doesn’t actually like me, or that some new idea I had and was confident about is actually just terrible (in situations where neither of these was actually the case). I can point to some large past irrationalities that were the source of some of this, but the values are persistent.
Raw exposure to social situations has reduced some of my wild update problems, but it only works well if I actually interact, which is hard to ensure. I also don’t get much else out of raw social interaction, so it’s not an attractive option.
If anyone wants to link me articles on here about thinking about and dealing with perverse desires I’d love that. It would also be awesome if anyone could point out mistakes that I seem to be making here.
This sounds pretty similar to a lot of my problems. Using this community’s terminology, I can have all the beliefs I want, but if I have sufficiently powerful overriding aliefs, I’m screwed—since the alief-guided motivational system is actually closer to the motor control subprocessors than the belief-guided motivational system is (aka “amygdala hijack”).
Worse, the alief-driven submodule is operating on its own utility table, which often is a nearly antiparallel eigenvector to my belief-driven submodule’s utility table. So I have two submodules each with strong impetus vectors towards/away from various attractors within the solution domain, and… well, thrashing happens.
Yeah, it’s supposed to do that. It’s kind of a problem when you have to unplug the TV to get work done, or to change departments to avoid letting the hot coworker seduce you. It does have advantages when you’re not very good at lofty decisions, though; you can see the problem with an organism that can just decide eating is wrong and starve to death.
People normally react to that by setting modest goals, acquiring the right habits to consistently achieve them, and then working their way up. Rewarding both systems (“After 50 minutes of work, eat a chocolate”) also helps.