On the “Spock” front, I dislike the identification of “rational” with “inhuman”. These, too, are human qualities! However, I certainly agree that many people do see this negatively.
There’s an interesting tension in marketing plans—how far can we go in using marketing, which is normally about exploiting irrational responses, to push rationality?
If people see rationalists using irrational arguments to push rationality, does it blow our credibility?
The local jargon term appears to be “dark arts”.
The tricky thing is that it’s hard to effectively interact with the typical not-particularly-rational human in a manner that someone, somewhere, couldn’t conceivably interpret as dark arts.
I tend to resolve this by doing whatever seems to have a reasonable chance of working, while not actively seeking to deceive and while aiming for a win-win outcome. Would the subject feel socially ripped off? If not, then fine. (This heuristic is somewhat inchoate and may not stand up to detailed examination, which I would welcome.)
Dunno about detailed examination, but will you settle for equally inchoate thoughts?
If I think about how N independent perfectly rational AI agents might communicate about the world, if they all had the intention of cooperating in a shared enterprise of learning as much as they can about it… one approach is for each agent to upload all their observations to a well-indexed central repository, and for each agent to periodically download all novel observations and then update on that.
They might also upload their inferences, in order to save one another the trouble of computing them… basically a performance optimization.
And they might have a mechanism for calibrating their inference engines… that is, agents A1 and A2 might periodically ensure that they are drawing the same conclusions from the same data, and engage in some diagnostic/repair work if not.
So that’s more or less my understanding of communication on the “light side of the Force”: share well-indexed data, avoid double-counting evidence, share the results of computationally expensive inferences (clearly labeled as such), and compare inference processes, pointing out discrepancies to support self-diagnosis and repair.
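Here is a minimal Python sketch of that ideal, just to make it concrete. Every class, method, and data structure below is invented for illustration; there is no real system or library behind it.

```python
class Repository:
    """Well-indexed central store of raw observations and labeled inferences."""
    def __init__(self):
        self.observations = {}  # obs_id -> observation
        self.inferences = {}    # inf_id -> (conclusion, supporting obs_ids)

    def upload_observation(self, obs_id, obs):
        self.observations.setdefault(obs_id, obs)

    def upload_inference(self, inf_id, conclusion, supporting_ids):
        # Stored separately and labeled as derived, so downloaders
        # don't mistake shared conclusions for independent evidence.
        self.inferences[inf_id] = (conclusion, tuple(supporting_ids))


class Agent:
    def __init__(self, name):
        self.name = name
        self.seen = set()  # obs_ids already updated on: no double-counting

    def sync(self, repo):
        # Download only novel observations and update on each exactly once.
        for obs_id, obs in repo.observations.items():
            if obs_id not in self.seen:
                self.update(obs)
                self.seen.add(obs_id)

    def update(self, obs):
        pass  # stand-in for the agent's actual inference engine

    def infer(self, data):
        # Stand-in for an expensive inference: deterministic in the
        # data, so two agents' engines can be compared directly.
        return sum(data)


def calibrate(a1, a2, shared_data):
    # Periodically check that both agents draw the same conclusion from
    # the same data; a mismatch triggers diagnostics/repair, not argument.
    if a1.infer(shared_data) != a2.infer(shared_data):
        print(f"divergence between {a1.name} and {a2.name}; run diagnostics")
```

The `seen` set carries the no-double-counting rule: each observation updates a given agent exactly once, however many times it gets re-downloaded or repeated.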
Humans don’t come anywhere near being able to do that, of course. But we can treat that as an ideal, and ask how well we are approximating it.
One obvious divergence from that ideal is that we’re dealing with other humans, who are not only just as flawed as we are, but are sometimes not even playing the same game: they may be actively distorting their transmissions in order to manipulate our behavior in various ways.
So right away, one thing I have to do is build models of other agents and estimate how they are likely to distort their output, and then apply correction algorithms to my human-generated inputs accordingly. And since they’re all doing the same thing, I have to model their likely models of me, and adjust my output to compensate for their distortions (aka corrections) of it.
So before either of us even opens our mouths, we are already two levels deep into a duel of the dark arts. The question is, how far am I willing to go?
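To make “two levels deep” concrete, here is a toy sketch. The multiplicative “bias” is a made-up stand-in for however humans actually distort, and all the function names are hypothetical; real rhetoric is obviously not this tidy.

```python
# Toy model of the distortion/correction duel described above.

def speak_naively(truth, my_bias):
    # Level 0: a speaker who just distorts.
    return truth * my_bias

def listen(heard, model_of_speaker_bias):
    # Level 1: the listener corrects by their model of the speaker's bias.
    return heard / model_of_speaker_bias

def speak_two_levels_deep(truth, model_of_their_model_of_me):
    # Level 2: I know they will divide by their model of my bias, so I
    # pre-multiply by my model of that model, hoping the two cancel.
    return truth * model_of_their_model_of_me

# If my model of their model is accurate, the corrections cancel
# and the intended message survives the duel:
heard = speak_two_levels_deep(10.0, model_of_their_model_of_me=1.5)
print(listen(heard, model_of_speaker_bias=1.5))  # 10.0
```

When the models diverge, what the listener recovers drifts away from what the speaker meant, and that drift is where the duel gets played out.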
In general, I draw my lines based on goals, not tactics.
What am I trying to accomplish? If I’m trying to understand someone, or be understood, or make progress towards a goal they value, or act in their interests, I’m generally cool with that. If I’m acting against their interests, I’m not so cool with that. If I’m trying to protect myself from damage (including social damage) or advance my own interests, I’m generally cool with that. These factors are sometimes in mutual opposition.
And then multiply that pairwise computation by the mutual interactions of all the other people we know, plus some dogs I really like, and approximate ruthlessly because I don’t have a hope of doing that matrix computation.
One doesn’t have to use irrational arguments to push rationality, but one of the lessons we draw from how people make decisions is that people simply do not decide how to view and understand the world, even when deciding to do so rationally, in an entirely rational way. The emotional connection matters as well.
Rational ideas proffered without an emotional counterpart wither. The political landscape is full of people who advanced good, rational programs, policy ideas, or views about science that crashed and burned for long periods because the audience didn’t respond.
Look at the argument of SarahC’s original post itself. It isn’t a philosophical proof built from Boolean logic; it is a testimonial about the emotional benefits of this kind of outlook. This is perfectly valid evidence, even if it is not obtained by a “reasoning process” of deduction. In the same way, I took particular pride when my non-superstitiously raised daughter won the highest good-character award in her elementary school, because it showed that rational thinking isn’t inconsistent with good moral character.
While one doesn’t want to undermine one’s own credibility with the approach one uses to make an argument, it is also important to defuse the false inferences in arguments against rationality. One such inference is that rational is synonymous with amoral. Another is that rational is synonymous with emotionally vacant and unfulfilling. A third is the sense that rationality implies using individual reason alone, without the benefit of a social network and context, because that is the character of a lot of activities (e.g., math homework, tax return preparation, or logic problems) that are commonly characterized as “rational.” A simple anecdote can show that these stereotypes aren’t always present. Evidence from a variety of sources can show that they are usually inapt.
When one looks at the worldview one chooses for oneself, it isn’t enough to argue that rationality gives correct answers; one must establish that it gives answers in a way that allows you to feel good about how you are living your life. Without testimonials and other emotional evidence, you don’t establish that there are no hidden costs being withheld from your audience.
Moreover, marketing, in the sense I am using the word, is not about “exploiting irrational responses.” It is about something much more basic—using words that will convey to the intended audience the message that you actually intend to convey. Care in one’s use of words, so as to avoid confusing one’s audience, is quintessentially consistent with the good practice of someone seeking to apply rational methods in philosophy.