Rationalists shouldn’t deny themselves the utility of rhetoric. Any rational rationalist can see that rhetoric is the path to winning, a kind of social theatre that lubricates decision-making with irrational or intermittently rational groups. If a group needs to be convinced of a position within a finite amount of time, bare reasoning isn’t always the best option.
Maybe that is too Machiavellian to be “really” rational, but it is the winning path.
I think I am using “rhetoric” in a different way than Aristotle did. For Aristotle, it was the art of speaking clearly and eloquently to communicate a position. I am using it more in the way people use it when they say “empty rhetoric” or “political rhetoric”. “Unless you give up your rights, the terrorists have already won” is my idea of an archetypal rhetorical technique. That may not be fair to the field of rhetoric, but I need some word to describe it and I can’t think of a better one, so “rhetoric” it is.
Rhetoric is a technique that may be useful to rationalists, but it’s not a rationalist technique. Compare the use of force. I may, as a rationalist, decide the best way towards my goal is murdering all who oppose me, in which case I’ll want to know techniques like how to use an assault weapon. But there’s still something fundamentally shady about the technique of killing people; it may just barely be justified on utilitarian grounds for a sufficiently important goal, but it’s one of those things that you use only as a last resort and even then only after agonizing soul-searching. I feel confident classifying the technique of murdering people effectively as a Dark Art.
I feel the same way about rhetoric (by my pessimistic definition). Tricking people into believing things they have no legitimate evidence for can certainly be helpful, but the more people do it the worse the world gets. Not only do people end up with less than maximally accurate beliefs, but every rhetorician needs to promote Dark Side Epistemology in order to keep zir job. And if I use rhetoric, you need to start using rhetoric just to keep up, and sooner or later everyone’s beliefs are completely skewed and inaccurate. It’s not quite as Dark an Art as force is, and it’s much easier to justify, but it’s in the same category.
Be careful about using the “rationalists should win” slogan too literally. Martial artists should win too, but that doesn’t mean they should take an AK-47 to their next sparring match and blow their opponent’s face off. Martial artists place high value on winning honorably. I see no reason why we shouldn’t emulate them.
I disagree. The problem with using dishonest rhetoric to win a debate isn’t that it’s winning dishonorably; it’s that it’s winning at the wrong game—a game that you wouldn’t consider the most important if you looked at it closely.
To continue with the martial arts analogy, imagine, say, a Chinese kung fu master in World War II Nanjing who knows that Japanese soldiers are coming to kill off all of his family. Should he try to win the fight honorably? Or just try to win using every dirty trick in the book (including running away)? If he focuses on winning honorably, he’s lost sight of his main goal (save his family) in favor of a secondary one (win honorably).
Similarly, if you focus on “winning the debate”, and as a result push people into a corner that will make them dislike you and become more attached to their identity as a believer in whatever—then you focused on the wrong subgoal, and lost at the one that was important to you.
I’m a precedent utilitarian. I try to maximize utility, except when doing so would set a bad precedent that would lower utility later.
Precedent utilitarians are usually good about refraining from force. Yes, killing a rich miser and distributing her money to the poor might increase utility. But it sets the precedent that anyone can kill someone if they think of a good enough reason, and most people won’t be smart enough to limit themselves to genuinely good reasons. Therefore, precedent utilitarians generally respect the rule of not killing others. But in certain cases this rule breaks down. In the WWII example you mention, it doesn’t seem particularly dangerous to set the precedent that you can use force against invaders coming to kill your family.
I try to use the same thought process when evaluating when to use rhetoric. If anyone can use rhetoric any time it furthers a goal that they consider genuinely good, then there’s little incentive to use rational argument except on the rare hard-core rationalists who are mostly resistant to rhetorical tricks. I want to be able to condemn a demagogue who uses rhetoric without being a hypocrite. If I needed to use rhetoric in a situation where I couldn’t blame anyone else for using rhetoric, like trying to save my family, I’d do it.
(The problem with precedent utilitarianism is that the calculations are impossible to do with real math, and mostly just involve handwaving. But I hope this at least gives a sketch of my thought processes.)
Yvain: “I’m a precedent utilitarian. I try to maximize utility, except when doing so would set a bad precedent that would lower utility later.”
I think this is an odd thing to say. Any utilitarian ought to be declining short-term gains that result in long-term losses. So why the need for this specific disclaimer?
Yvain seems to be using the term to mean a utilitarian (in the pure sense) who scrupulously considers the force of his example. The implication is that many don’t—we’re not talking about perfectly rational beings here, just people who agree with the principle of utility maximization.
Um, isn’t it kind of rhetorical to compare rhetoric to force and murder?
Also, all your articles here that I recall—likewise those of Eliezer on Overcoming Bias—are masterful applications of rhetoric. So I’m kind of confused here. Is this one of those “do as I say, not as I do” things?
If you mean the articles here are clear or well argued, thank you. I have no objection to clarity or good argument; see the first paragraph of the comment above. If you mean that I’m using dirty tricks like the “terrorists win” example, then I’d like to know exactly what you mean so I can avoid doing it in the future.
When I compare rhetoric (meaning “empty rhetoric”, as mentioned) to force and murder, I’m not saying they’re equally bad, or doing one leads to the other or anything like that. Just that they’re bad for the same reason. Both are potentially “useful” techniques. But both prevent rational argument and if used too frequently lead to a world in which rational argument is impossible.
But that is precisely the sort of “dirty trick” you claim to be against. By using murder as an example, you’re setting off a “boo light” (opposite of applause light) and linking it to the thing you want people to dislike. That’s rhetoric, and emotional manipulation.
And it’s neither a good thing nor a bad thing, in itself. Used to strengthen a valid argument, it’s fine. Arguing that it’s bad in and of itself is a misunderstanding… and another “boo light” (e.g. “empty rhetoric”, “dirty tricks”).
Emotional manipulation is unavoidable, by the way. Boring presenters and neutral presentations are just manipulating people’s emotions either towards boredom and not caring, or to “respect”, “status”, and “seriousness”, depending on the audience. It’s best to deliberately choose what emotions you want to create, in whom, rather than leaving the matter to chance.
I think the point is that you do a little of both; loosely speaking you are guilty of being fairly eloquent—presenting your ideas persuasively and engagingly, in a style that is inherently likely to increase acceptance.
It is an unavoidable facet of human communication that the same idea can be more or less persuasive depending on how it is presented. Over on OB, Robin uses a far more neutral (or at times even anti-persuasive) style, and if memory serves me he and Eliezer have argued a bit about such use of style.
Except, of course, for all those aspects of martial arts which we shouldn’t emulate.
We’re running up against the equivocation at the core of this community, between rationalists as people who make optimal plays versus rationalists as people who love truth and hate lies.
rationalists as people who make optimal plays versus rationalists as people who love truth and hate lies
It’s only possible for us to systematically make optimal plays IF we have a sufficient grasp of truth. There’s only an equivocation in the minds of people who don’t understand that one goal is a necessary precursor for the other.
No, I think there is an equivocation here, though that’s probably because of the term “people who love truth and hate lies” instead of “epistemic rationalist”.
An epistemic rationalist wants to know truth and to eliminate lies from their mind. An instrumental rationalist wants to win, and one precursor to winning is to know truth and to eliminate lies from one’s own mind.
However, someone who “loves truth and hates lies” doesn’t merely want their own mind to be filled with truth. They want all minds in the universe to be filled with truth and for lies to be eliminated from all minds. This can be an impediment to “winning” if there are competing minds.
The inventors of the original form of rationalist virtue AND rhetoric sure didn’t think that the latter was a dark art. Rationalists should WIN!
Rationalists have better definitions of “winning”. They don’t necessarily include triumphing in social wrestling matches.
Actually, I think “Rationalists should WIN” holds regardless of what their goals are, even if that includes social wrestling matches.
The “should” here is not intended to be moral prescriptivism. I’m not saying that in a morally/ethically ideal world, rationalists would win. Instead, I’m using “should” to help define what the word “Rationalist” means. If some person is a rationalist, then given equal opportunity, resources, difficulty of goal, etc., they will on average, probabilistically, win more often than someone who is not a rationalist. And if they happen to be an evil rationalist, well, that sucks for the rest of the universe, but that’s still what “rationalist” means.
I believe this definitional-sense of “should” is also what the originator of the “Rationalists should WIN” quote intended.
There is a bit of a problem here in that the list of the greatest rationalists ever will be headed by people like Genghis Khan and Prophet Muhammad.
People who win are not necessarily rationalists. A person who is a rationalist is more likely to win than a person who is not.
Consider someone who just happens to win the lottery vs someone who figures out what actions have the highest expected net profit.
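The distinction can be made concrete with a back-of-the-envelope expected-value calculation (all the numbers here are invented for illustration):

```python
# Invented numbers: a $2 ticket, a 1-in-300-million chance at a $100M jackpot.
ticket_cost = 2.0
p_win = 1 / 300_000_000
jackpot = 100_000_000

# Expected net profit of each action.
ev_buy = p_win * jackpot - ticket_cost  # about -$1.67 per ticket
ev_skip = 0.0                           # keep the $2

# The "highest expected net profit" person picks the max-EV action...
best_action = max([("buy", ev_buy), ("skip", ev_skip)], key=lambda a: a[1])
# ...even though, on any single draw, the ticket-buyer might get lucky.
```

The point is that one lucky winner’s outcome doesn’t retroactively make buying the ticket the higher-expected-value play.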
Edit: That said, be careful not to succumb to the argument from consequences (http://rationalwiki.org/wiki/Argument_from_consequences); maybe Genghis Khan really was one of the greatest rationalists ever. I’ve never met the guy nor read any of his writings, so I wouldn’t know.
Even ignoring the issue that “rationalist” is not a binary variable, I don’t know how, in practice, you will be able to tell whether someone is a rationalist or not. Your definition depends on counterfactuals, and without them you can’t disentangle rationalism and luck.
I assume that you accept the claim that it is possible to define what a fair coin is, and thus what an unfair coin is.
If we observe some coin, at first, it may be difficult to tell if it’s a fair coin or not. Perhaps the coin comes from a very trustworthy friend who assures you that it’s fair. Maybe it’s specifically being sold in a novelty store and labelled as an “unfair coin” and you’ve made many purchases from this store in the past and have never been disappointed. In other words, you have some “prior” probability belief that the coin is fair (or not fair).
As you see the coin flip, you can keep track of its outcomes, and adjust your belief. You can ask yourself “Given the outcomes I’ve seen, is it more likely that the coin is fair? or unfair?” and update accordingly.
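As a toy sketch of that updating process (the specific prior, the number of flips, and the assumed alternative hypothesis here are all made up for illustration):

```python
from fractions import Fraction

def p_fair_given_flips(prior_fair, heads, tails, bias=Fraction(3, 4)):
    """Posterior probability that the coin is fair, after seeing the flips.

    Simplifying assumption: the only alternative hypothesis is a coin
    biased toward heads with P(heads) = bias.
    """
    # Likelihood of this exact sequence under each hypothesis.
    like_fair = Fraction(1, 2) ** (heads + tails)
    like_biased = bias ** heads * (1 - bias) ** tails
    # Bayes' rule: P(fair | data) is proportional to P(data | fair) * P(fair).
    num = like_fair * prior_fair
    return num / (num + like_biased * (1 - prior_fair))

# A trustworthy friend vouches for the coin: strong prior that it's fair.
prior = Fraction(9, 10)
# Then we watch 8 heads come up in 10 flips; belief in fairness drops
# from 0.9 to about 0.58, but doesn't collapse to zero.
posterior = p_fair_given_flips(prior, heads=8, tails=2)
```

A strong prior survives a moderately surprising run of flips; enough surprising flips would eventually overwhelm it.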
I think the same applies for rationalists here. I meet someone new. Eliezer vouches for her as being very rational. I observe her sometimes winning, sometimes not winning. I expend mental effort and try to judge how easy or difficult her situation was and how much effort/skill/rationality/luck it would have taken her to win in that situation. I try to analyze how it came about that she won when she won, or lost when she lost. I try to discount evidence where luck was a big factor. She bought a lottery ticket, and she won. Should I update towards her being a rationalist or not? She switched doors in Monty Hall, but she ended up with a goat. Should I update towards her being a rationalist or not? Etc.
Hm, OK. So you are saying that the degree of rationalism is an unobservable (hidden) variable and what we can observe (winning or losing) is contaminated by noise (luck). That’s a fair way of framing it.
The interesting question then becomes what kind of accuracy you can achieve in the real world, given that the noise levels are high, the information available to you is limited, and your perception is imperfect (e.g. it’s not uncommon to interpret non-obvious high skill as luck).
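A quick simulation gives some intuition for that accuracy question (the win probabilities and sample sizes are invented for illustration): even a real skill gap takes many noisy observations to detect.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def observed_wins(p_win, n_trials):
    """Count wins for an agent whose true (hidden) win probability is p_win."""
    return sum(random.random() < p_win for _ in range(n_trials))

# Hidden variable: the rationalist wins 60% of the time, the control 50%.
# With only 10 observations each, the noisy win counts often overlap...
few_a = observed_wins(0.6, 10)
few_b = observed_wins(0.5, 10)

# ...but with 1000 observations each, the gap becomes hard to miss.
many_a = observed_wins(0.6, 1000)
many_b = observed_wins(0.5, 1000)
```

With ten observations the standard deviation of the win count is about 1.6, comparable to the expected gap of one win, so small samples say almost nothing; at a thousand observations the expected gap of ~100 wins dwarfs the noise.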
Right, I suspect just having heard about someone’s accomplishments would be an extremely noisy indicator. You’d want to know what they were thinking, for example by reading their blog posts.
Eliezer seems pretty rational, given his writings. But if he repeatedly lost in situations where other people tend to win, I’d update accordingly.
But what about the other case? People who don’t seem rational given their writings but who repeatedly win?
Possibly he’s just extremely lucky. There are seven billion people in the world—one of these people is almost certain to be luckier than all of the rest.
Possibly he is being looked after by a far more competent person behind the scenes; a spouse or a parent, perhaps, who dislikes being visible but works to help that person succeed.
Possibly that person really is more rational than you are, but his methods of success are so alien to you that your first instinct is to reject them out-of-hand.
Possibly his “writings” are actually being ghost-written by someone else.
Possibly he doesn’t much care about what he writes, going for low-effort writing in order to concentrate on winning.
Possibly he’s found one exploit that really works but won’t work if everyone does it; thus, he keeps quiet about it.
Possibly he’s deliberately writing to obscure or hide his own methods of success.
Possibly he’s found a winning strategy, but he doesn’t understand why it works, and thus invents a completely implausible “explanation” for it.
...have I missed anything?
If I understand the Peter Thiel doctrine of the secret correctly, that should be the case in many instances.
Some people are rich and can afford valuable things even if they don’t spend their money wisely. Some people might win because they have a lot of resources or connections to throw at problems.
If you define rationality as winning, why does it matter what his writings seem like?
I can’t directly observe Eliezer winning or losing, but I can make (perhaps very weak) inferences about how often he wins/loses given his writing.
As an analogy, I might not have the opportunity to play a given videogame ABC against a given blogger XYZ that I’ve never met and will never meet. But if I read his blog posts on ABC strategies, and try to apply them when I play ABC, and find that my win-rate vastly improves, I can infer that XYZ also probably wins often (and probably wins more often than I do).
Well, if what you want to accomplish is motivating large groups of people to support you and using them to conquer a large empire, then you should study what they did and how they did it.
Now that you mention it, I actually don’t.