In fact, it seems that the linked argument relies on a version of the orthogonality thesis rather than being refuted by it:
For almost any ultimate goal—joy, truth, God, intelligence, freedom, law—it would be possible to do it better (or faster or more thoroughly or to a larger population) given superintelligence (or nanotechnology or galactic colonization or Apotheosis or surviving the next twenty years).
Nothing about the argument contradicts “the true meaning of life”—which seems in that argument to be effectively defined as “whatever the AI ends up with as a goal if it starts out without a goal”—being e.g. paperclips.
The second part of the sentence, yes. The bolded part seems to acknowledge that AIs can have different goals, and I assume that version of EY wouldn’t count “God” as a good goal.
Another more relevant part:
Presumably this goal object can be anything.
I agree that EY rejected the argument because he accepted OT. I very much disagree that this is the only way to reject the argument. In fact, all four positions seem quite possible:
1. Accept OT, accept the argument: sure, AIs can have different goals, but this (starting an AI without explicit goals) is how you get an AI which would figure out the meaning of life.
2. Reject OT, reject the argument: you can think “figure out the meaning of life” is not a possible AI goal.
3. and 4. Accept OT, reject the argument, and reject OT, accept the argument: EY’s positions at different times.
In addition, OT can itself be a reason to charge ahead with creating an AGI: since it says an AGI can have any goal, you “just” need to create an AGI which will improve the world. It says nothing about whether setting an AGI’s goal is difficult.