Two objections to this: First, you have to extrapolate from the chimp-to-human range into the superintelligence range, and the gradient may not be the same in the two. Second, it seems to me that the more intelligent humans become, the more “the other humans in my tribe” becomes the dominant part of the environment; this increases the returns to intelligence, and consequently you do get increasing optimisation pressure.
To your first objection, I agree that the gradient may not be the same for chimp-to-human growth and human-to-superintelligence growth. But Eliezer’s stated reason mostly applies to the region near human intelligence, as I said. There is no consensus on how far the “steep” region extends, so I think your doubt is justified.
Your second objection also sounds reasonable to me, but I don’t know enough about evolution to confidently endorse or dispute it. It sounds similar to a point Tim Tyler makes repeatedly in this sequence, though I haven’t investigated his views thoroughly. I believe his stance is as follows: since humans select mates using their brains, intelligence is vital for human survival, and sexual organisms want to pick fit mates, there has been a nontrivial feedback loop from humans using their intelligence to select intelligent mates. Do you endorse this? (I am not sure, myself.)