Your link seems to address only a restricted case of the random mind space argument, in which an AI is given a correctly specified goal but insufficient constraints on its behavior with respect to resources. There the randomness lies not in what the AI principally values (e.g., paperclips) but in what else it values. A complete counterargument should also address the case where, say, we try to create a paperclip maximizer and end up creating a staple maximizer.