Fair enough. :) Yes, from the fact that probability * utility is small, we can’t tell whether the probability is small, the utility is, or both. In the case of shaping AI specifically, I haven’t heard good arguments against assigning it a non-negligible probability of success, and I also know that many people don’t bite Pascalian wagers at least partly because they don’t like Pascalian wagers as such, rather than because they disagree with the premises. Combining these suggests the probability side isn’t so much the issue, though this suggestion remains to be verified. Also, people will often feign ridiculously small probabilities to get out of Pascalian wagers, but they usually make these proclamations after the fact, or else are the kind of people who say “any probability less than 0.01 is set to 0” (except when wearing seat belts to protect against a car accident or something, highlighting what Nick said about people potentially being more rational for important near-range decisions).
Anyway, not accepting a Pascalian wager does not mean you don’t agree with the probability and utility estimates; maybe you think the wager is missing the forest for the trees and ignoring bigger-picture issues. I think most Pascalian wagers can be defused by saying, “If that were true, this other thing would be even more important, so you should focus on that other thing instead.” But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
You are also correct that differences in moral values don’t completely shield me from updating my probabilities when I find my actions diverging from those of others. However, in cases where people do make their probabilities explicit, I don’t normally diverge substantially (or if I do, I tend to update somewhat), and in those particular cases, divergent values comprise the remainder of the gap (usually most of it). Of course, I may have already updated the most in the cases where people have made their probabilities explicit, so maybe there’s bigger latent epistemic divergence where we’re distant from the lamp post.
“If that were true, this other thing would be even more important, so you should focus on that other thing instead.” But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community), I think that a large part of what’s going on is that people recognize that they have a substantial comparative advantage in a given domain, and think that they can have the biggest impact by doing what they’re best at, and so don’t try to optimize between causes. I think that their reasoning is a lot closer to the mark than initially meets the eye, for reasons that I gave in my posts Robustness of Cost-Effectiveness Estimates and Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren’t utilitarian. But because of the number of people, and the diversity of comparative advantages, some of them will be working on problems that are utilitarian by chance, and will learn a lot about how best to address these problems. You may argue that the problems that they’re working on are different from the problems that you’re interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
As for people not working to shape AI, I think that the utilitarian expected value of working to shape AI is lower than it may initially appear. Some points:
For reasons that I outline in this comment, I think that the world’s elites will do a good job of navigating AI risk. Working on AI risk is in part fungible, and I believe that the effect size is significant.
If I understand correctly, Peter Thiel has argued that the biggest x-risk comes from the possibility that if economic growth halts, we’ll shift from a positive-sum situation to a zero-sum situation, which will erode prosocial behavior, which could give rise to a self-reinforcing feedback loop that leads to societal collapse. We’ve already used lots of natural resources, and so might not be able to recover from a societal collapse. Carl has argued against this, but Peter Thiel is very sophisticated and so his view can’t be dismissed out of hand. This increases the expected value of pushing on economic growth relative to AI risk reduction.
More generally, there are lots of X for which there’s a small probability that X is the limiting factor for a space-faring civilization. For example, maybe gold is necessary for building spacecraft that can travel from Earth to places with more resources, so that the limiting factor for a spacefaring civilization is gold, and the number one priority should be preventing gold from being depleted. I think that this particular scenario is very unlikely; I’m only giving one example. Note that pushing on economic growth reduces the probability that gold will be depleted before it’s too late, and I think the same holds for many values of X. If so, the prima facie reaction “but if gold is the limiting factor, then one should pursue more direct interventions than pushing on economic growth” loses force, because pushing on economic growth has a uniformly positive impact across different values of X.
I give a case for near-term helping (excluding ripple effects) potentially having astronomical benefits comparable to those of AI risk reduction in this comment.
An additional consideration that buttresses the above point is that as you’ve argued, the future may have negative expected value. Even if this looks unlikely, it increases the value of near-term helping relative to AI risk reduction, and since near-term helping might have astronomical benefits comparable to those of AI risk reduction, it increases the value by a nontrivial amount.
Viewing all of these things in juxtaposition, I wouldn’t take people’s low focus on AI risk reduction as very strong evidence that people don’t care about astronomical waste. See also my post Many Weak Arguments and the Typical Mind: the absence of an attempt to isolate the highest expected value activities may be adaptive rather than an indication of lack of seriousness of purpose.
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community)
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren’t utilitarian.
Yes, this was my point.
You may argue that the problems that they’re working on are different from the problems that you’re interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
Definitely. I wouldn’t claim otherwise.
I wouldn’t take people’s low focus on AI risk reduction as very strong evidence that people don’t care about astronomical waste.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Compare with behavioral economics. You can explain away any given discrepancy from classical microeconomic behavior by rational agents through an epicycle in the theory, but combined with all that we know about people’s psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Carl has argued against this, but Peter Thiel is very sophisticated and so his view can’t be dismissed out of hand.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
Yes, this was my point.
I was acknowledging it :-)
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Elite common sense says that voting is important for altruistic reasons. It’s not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it’s not clear that people have bounded utility functions. (For what it’s worth, I no longer consider myself to have a bounded utility function.)
People’s moral intuitions do deviate from utilitarianism, e.g. probably most people don’t subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
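The linear-vs-bounded distinction running through this exchange can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers; `bounded_utility` is just one arbitrary saturating form, not a model of anyone’s actual preferences:

```python
import math

def bounded_utility(x, scale=1e6):
    """One illustrative bounded utility function: increases in x but
    saturates near 1 once x is far above `scale`. The functional form
    and scale are made-up assumptions, not anyone's actual values."""
    return 1 - math.exp(-x / scale)

# A toy Pascalian wager: tiny probability p of an astronomically large payoff x.
p, x = 1e-10, 1e30

ev_linear = p * x                    # linear utility: enormous expected value
ev_bounded = p * bounded_utility(x)  # bounded utility: capped at p, so negligible

print(ev_linear, ev_bounded)
```

A linear agent takes the wager for any positive p, no matter how small; a bounded agent’s expected utility from the wager can never exceed p, so ever-huger payoffs stop compensating for ever-tinier probabilities.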
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
I’ve been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has extremely broad and penetrating knowledge. He may have the highest crystallized intelligence of anybody I’ve ever encountered. I would not be surprised if he’s studied the possibility of stagnation and societal collapse in more detail than Carl has.
Elite common sense says that voting is important for altruistic reasons.
This is because they’re deontologists, not because they’re consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there’s more overlap between deontology and consequentialism than meets the eye.)
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
It may be best to examine on a case-by-case basis. We don’t need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there’s enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
I Googled the question and found similar responses in this article:
One reason that people often offer for voting is “But what if everybody thought that way?” [...]
Another reason for voting, offered by political scientists and lay individuals alike, is that it is a civic duty of every citizen in a democratic country to vote in elections. It’s not about trying to affect the electoral outcome; it’s about doing your duty as a democratic citizen by voting in elections.
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
I believe that this pattern is quite general. Our intuitions are not utilitarian, and as a result it is often possible to devise cases in which our intuitions conflict with utilitarianism. But at the same time, our intuitions are somewhat constrained by utilitarianism. This is because we care about utilitarian outcomes, and when a practice is terribly anti-utilitarian, there is, sooner or later, a voice in favor of abolishing it, even if the voice is not explicitly utilitarian. Take the case of drunk driving. Drinking is okay. Driving is okay. Doing both at the same time isn’t such an obviously horrible thing to do, but we’ve learned the hard way that this intuitively innocuous, even fun, activity is tremendously damaging. And now, having moralized the issue with the help of organizations like Mothers Against Drunk Driving—what better moral authority than Mom?—we are prepared to impose very stiff penalties on people who aren’t really “bad people,” people with no general anti-social tendencies. We punish drunk driving and related offenses in a way that appears (or once appeared) disproportionately harsh because we’ve paid the utilitarian costs of not doing so.[39] The same might be said of harsh penalties applied to wartime deserters and draft-dodgers. The disposition to avoid situations in which one must kill people and risk being killed is not such an awful disposition to have, morally speaking, and what could be a greater violation of your “rights” than your government’s sending you, an innocent person, off to die against your will?[40] Nevertheless we are willing to punish people severely, as severely as we would punish violent criminals, for acting on that reasonable and humane disposition when the utilitarian stakes are sufficiently high.[41]
The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Explicitly yes, but implicitly...?
Just ask people why they vote,
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many that are quantitatively oriented I suspect aren’t familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn’t make a difference in expectation, so maybe most people fall into the bucket of those who haven’t really thought about the matter rigorously at all. (Remember, we’re including English and Art majors here.)
You could say, “If they knew the arguments, they would be persuaded,” which may be true, but that doesn’t explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
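The claim that voting “doesn’t make a difference in expectation” can be checked against a toy model. The sketch below assumes n other voters flipping independent fair coins (a strong idealization, not a model of real electorates) and computes the chance that one extra vote breaks an exact tie; it shrinks only like 1/sqrt(n) rather than dropping straight to zero:

```python
import math

def pivotal_probability(n_other_voters, p=0.5):
    """Probability of an exact tie among an even number of other voters,
    each independently voting for candidate A with probability p, so that
    one extra vote decides the outcome. Computed in log-space to avoid
    underflow for large n. A toy i.i.d. model, not real election math."""
    n, k = n_other_voters, n_other_voters // 2
    log_prob = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_prob)

# Exact tie probability vs. the asymptotic approximation sqrt(2 / (pi * n)):
for n in (100, 10_000, 1_000_000):
    print(n, pivotal_probability(n), math.sqrt(2 / (math.pi * n)))
```

Even at a million voters the tie probability in this model is around 0.0008, which is why a vote with large stakes need not be negligible in expectation; real-world correlations and non-50/50 splits push the number down further, which is where the sophistication mentioned above comes in.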
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing).
“It’s a civic duty” is deontological if anything is, because deontology is duty-based ethics.
“If everyone didn’t vote, things would be bad” is an application of Kant’s categorical imperative.
“Our forefathers died for this, so we shouldn’t waste it” is not deontological—just the sunk-cost fallacy.
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
At some point it may become a debate about the teleological level at which you assess their “reasons.” As individuals, it’s very likely the value of voting is terminal in some sense, based on cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It’s similar to assessing the “reason” why a mother cares for her child. At an individual / neural level it’s based on reward circuitry. At a broader evolutionary level, it’s based on bequeathing genes.
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn’t demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Thanks! I’m not sure I understood your comment. Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
Yes, specifically similar with respect to decision theory implementation.
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
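A minimal sketch of the reasoning involved (the payoff matrix and correlation values are made up for illustration; `correlation` stands for how strongly your choice predicts the similar agent’s):

```python
def expected_payoff(my_action, correlation, payoffs):
    """Expected payoff if, with probability `correlation`, the other agent
    makes the same choice I do, and otherwise makes the opposite choice.
    This treats my decision as evidence about theirs, in the spirit of
    evidential / timeless-style reasoning; it is not a causal claim."""
    opposite = "D" if my_action == "C" else "C"
    return (correlation * payoffs[(my_action, my_action)]
            + (1 - correlation) * payoffs[(my_action, opposite)])

# My payoff for (my action, their action); standard prisoner's-dilemma ordering.
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

for corr in (0.5, 0.8, 0.95):
    ev_c = expected_payoff("C", corr, payoffs)
    ev_d = expected_payoff("D", corr, payoffs)
    print(corr, ev_c, ev_d, "cooperate" if ev_c > ev_d else "defect")
```

At correlation 0.5 (a stranger) defection still wins; with these payoffs cooperation only overtakes it once the correlation passes 5/7 ≈ 0.71, which matches the intuition that similarity has to be substantial before timeless considerations bite.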
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate?
Yes, it applies among (some of) that class of humans.
I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
You’re assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don’t do it much at all. Typically a statement like “any probability less than 0.01 I set to 0” really means “I have this set of preferences, and I think I can derive a statement about probabilities from that set of preferences.” Pointing out that they don’t actually ignore a probability of 0.01 when wearing a seatbelt should then lead to a response of “I guess my derivation isn’t quite right” and lead them to revise the statement, but it’s not a reason why they should change their preferences in the cases that they originally derived the statement from.
Yep, that’s right. In my top-level comment, I said, “In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.” Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.) elites tend to be (relatively) more interested in utilitarian calculations.
Thanks, Jonah. :)
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
Yes, this was my point.
Definitely. I wouldn’t claim otherwise.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Compare with behavioral economics. You can explain away any given discrepancy from classical microeconomic behavior by rational agents through an epicycle in the theory, but combined with all that we know about people’s psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
I was acknowledging it :-)
Elite common sense says that voting is important for altruistic reasons. It’s not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it’s not clear that people have bounded utility functions. (For what it’s worth, I no longer consider myself to have a bounded utility function.)
People’s moral intuitions do deviate from utilitarianism, e.g. probably most people don’t subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
I’ve been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has extremely broad and penetrating knowledge. He may have the highest crystalized intelligence of anybody who I’ve ever encountered. I would not be surprised if he’s studied the possibility of stagnation and societal collapse in more detail than Carl has.
This is because they’re deontologists, not because they’re consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there’s more overlap between deontology and consequentialism than meets the eye.)
It may be best to examine on a case-by-case basis. We don’t need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than to their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there’s enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.
Points taken on the other issues we discussed.
How do you know this? Do you think that these people would describe their reason for voting as deontological?
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
I Googled the question and found similar responses in this article:
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
Explicitly yes, but implicitly...?
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many that are quantitatively oriented I suspect aren’t familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn’t make a difference in expectation, so maybe most people fall into the bucket of those who haven’t really thought about the matter rigorously at all. (Remember, we’re including English and Art majors here.)
You could say, “If they knew the arguments, they would be persuaded,” which may be true, but that doesn’t explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
“It’s a civic duty” is deontological if anything is, because deontology is duty-based ethics.
“If everyone didn’t vote, things would be bad” is an application of Kant’s categorical imperative.
“Our forefathers died for this, so we shouldn’t waste it” is not deontological—just the sunk-cost fallacy.
At some point it may become a debate about the teleological level at which you assess their “reasons.” As individuals, it’s very likely the value of voting is terminal in some sense, based on cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It’s similar to assessing the “reason” why a mother cares for her child. At an individual / neural level it’s based on reward circuitry. At a broader evolutionary level, it’s based on bequeathing genes.
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn’t demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Thanks! I’m not sure I understood your comment. Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Yes, specifically similar with respect to decision theory implementation.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
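The “similar enough” condition above can be made concrete with a small expected-utility sketch. Suppose the other player’s choice matches yours with probability p (because you run correlated decision procedures); then cooperating beats defecting once p clears a threshold set by the payoffs. The payoff numbers here are the textbook prisoner’s-dilemma values, not anything from this thread:

```python
# Sketch of the correlation argument with standard PD payoffs T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def eu_cooperate(p):
    # With prob p they mirror your cooperation (R); otherwise they defect (S).
    return p * R + (1 - p) * S

def eu_defect(p):
    # With prob p they mirror your defection (P); otherwise you exploit them (T).
    return p * P + (1 - p) * T

# Cooperate iff p * (R - P) > (1 - p) * (T - S), i.e. p above this threshold:
threshold = (T - S) / (T - S + R - P)
print(round(threshold, 3))  # 5/7 ~ 0.714 for these payoffs
```

So cooperation only makes sense here for quite high correlation (p > ~0.71 with these numbers), which fits the claim that the effect is plausible for a very similar counterpart but likely too weak to drive ordinary voting.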
Yes, it applies among (some of) that class of humans.
Yes.
You’re assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don’t do it much at all. Typically a statement like “any probability less than 0.01 is set to 0” really means “I have this set of preferences, but I think I can derive a statement about probabilities from that set of preferences”. Pointing out that they don’t actually ignore a probability of 0.01 when wearing a seatbelt, then, should lead to a response of “I guess my derivation isn’t quite right” and lead them to revise the statement, but it’s not a reason why they should change their preferences in the cases that they originally derived the statement from.
Yep, that’s right. In my top-level comment, I said, “In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.” Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.) elites tend to be (relatively) more interested in utilitarian calculations.