Selfish reasons for FAI

Let’s take for granted that pursuing FAI is the best strategy for researchers interested in the future of all humanity. Let’s also assume, however, that controlling unfriendly AI is not completely impossible. I would like to see arguments on why FAI may or may not be the best strategy for AGI researchers who are motivated solely by selfish values: e.g., personal status, curiosity, the well-being of their loved ones, etc.

I believe such a discussion is important because i) all researchers are selfish to some extent, and ii) it may be unwise to ignore researchers who fail to commit to perfect altruism. I myself do not know how selfish I would be if I were to become an AGI researcher in the future.

EDIT: Moved some of the original post content to a comment, since I suspect it was distracting from my main point.