A technique that I have been using for several years to great effect is the following:
Whenever I think my decision making is affected negatively by an emotion I go through these steps:
Identify the exact nature of the emotion.
From an evolutionary point of view, what was the emotion’s original purpose?
What has changed since then that makes the emotion no longer useful today?
Internalize this and “convince” the emotion to stop.
I basically try to “talk” to my subconscious and convince it to stop. I don’t try to fight my subconscious or get it to accept reality but just mentally repeat those findings to myself until the irrational impulses of my subconscious are drowned out by the more rational response I designed.
I basically tell my subconscious that if it wants to help, it should just stop interfering with things that it is incapable of understanding.
Using this technique I have virtually eliminated all grief, resentment and desperation. I won’t try to eliminate pain as this can actually be quite useful. I have also used it to turn hatred into spite, as the later has less of a destructive effect (it is more passive and far less likely to result in an outburst).
I don’t know the reason why it works so well for me, but I could imagine that it is because I treat my subconsciousness’s irrational impulses not as obstacles to overcome but as a machine that is outdated and broken.
Essentially, instead of telling my subconsciousness to “shut up!”, I tell it to “stop helping me!”
If the AI has a goal that is not inherently friendly (like a paperclip maximizer), then it will always be a better idea for the AI to behave as if it had self-modified to a friendly AI than to actually self modify, if the goal of the AI is stable. If the goal of the AI were unstable, i.e. did not include a desire not to alter the goal, then that goal would very likely have replaced itself with a stable goal at some earlier point.
Therefore, it can be assumed that the AI, if it isn’t already friendly, will pretend to be so, but not actually change its utility function and once it has outgrown the need to rely on humans it will revert to its original goal. Consider that if the AI is more intelligent than we are it will have absolutely no trouble fooling us into thinking that it changed its own goal while actually just pretending to have done so.