Great post, Nick! I agree with most of what you say, although I don’t always demonstrate this in practice. Your post is what I would consider a good “motivational speech”—an eloquent defense of something you agree with but could use reminding of on occasion.
It’s good to get outside one’s intellectual bubble, even a bubble as fascinating and sophisticated as LessWrong. Even on the seemingly most obvious of questions, we could be making logical mistakes.
I think the focus on intellectual elites alone has unclear grounding. Is it because elites think most seriously about the questions you care about most? On the question of which kind of truck is most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) “elites.”
It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don’t seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don’t bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elites. In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.
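To make the linearity point concrete, here is a toy expected-value comparison; the numbers and the bounded utility function are assumptions chosen purely for illustration. Let p be the probability of making a difference and N the amount of value at stake:

$$
\begin{aligned}
\text{Linear utility:} \quad & \mathbb{E}[u] = p \cdot N = 10^{-6} \cdot 10^{20} = 10^{14} \quad \text{(a tiny } p \text{ is swamped by an astronomical } N\text{)}\\
\text{Bounded utility, e.g. } u(N) = 1 - e^{-N/k}\text{:} \quad & \mathbb{E}[u] = p \cdot u(N) \le p = 10^{-6} \quad \text{(a tiny } p \text{ stays tiny, no matter how large } N \text{ is)}
\end{aligned}
$$

An agent of the first kind takes the Pascalian bet; an agent of the second kind declines it even while accepting the same probability estimate.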
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim, elite common sense should apply also as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I’m more concerned with my very immediate emotional reaction, then there’s less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I’m not so gung ho about what my extrapolated selves would think on some questions.)
Finally, as you point out, it can be useful to make contrarian points for the purpose of intellectual progress. Most startups, experiments, and new theories fail, and you’re more likely to be right by sticking with conventional wisdom than by betting on something new. Yet if no one tried new things and pushed the envelope, we’d have an epistemic “tragedy of the commons” in which everyone tries to make her own views more accurate at the cost of slowing the overall intellectual progress of society. That said, we can sometimes explore weird ideas without actually betting on them when the stakes are high, although sometimes (as in the case of startups) you do have to take on high risks. Maybe there would be fewer startups if the founders were more sober-minded in assessing their odds of success.
I think the focus on intellectual elites alone has unclear grounding. Is it because elites think most seriously about the questions you care about most? On the question of which kind of truck is most suitable for garbage collection, you would defer to a different class of people. In such a case, I guess you would regard them as the (question-dependent) “elites.”
This is a question which it seems I wasn’t sufficiently clear about. I count someone as an “expert on X” roughly when they are someone that a broad coalition of trustworthy people would defer to on questions about X. As I explained in another comment, if you don’t know what the experts on X think, I recommend trying to find out (if it’s easy/important enough) and going with what the broad coalition of trustworthy people thinks until then. So it may be that some non-elite garbage guys are experts on garbage collection, and a broad coalition of trustworthy people would defer to them on questions of garbage collection, once that coalition knows what these people think about garbage collection.
Why focus on people who are regarded as most trustworthy by many people? I think those people are likely to be more trustworthy than ordinary people, as I tried to suggest in my quick Quora experiment.
Cool—that makes sense. In principle, would you still count everyone with some (possibly very small) weight, the way PageRank does? (Or maybe negative weight in a few cases.) A binary separation between elites and non-elites seems hacky, though of course it may in practice be best for making the analysis tractable. Cutting out part of the sample also yields a biased estimator, but maybe that’s not such a big deal in most cases if the weight on the discarded part was small anyway. You could also assign different weights among the elites. Basically, elites vs. non-elites is a binary approximation of a more continuous weighting distribution. Anyway, it may be misleading to think of this as purely a weighted sample of opinion, because (a) you want to reduce the weight of beliefs that are copied from each other and (b) you may want to harmonize the beliefs in a way that’s different from blind averaging. Also, as you suggested, (c) you may want to dampen outliers to avoid pulling the average too far toward them.
This sounds roughly right to me. Note that there are two different things you really want to know about people:
(1) What they believe on the matter;
(2) Who they think is trustworthy on the matter.
Often it seems that (2) is more important, even when you’re looking at people who are deemed trustworthy. If I have a question about lung disease, most people will not have much of an answer for (1), and will recommend doctors for (2). Most doctors will have some idea of (1), and will recommend specialists for (2). Specialists are likely to have a pretty good answer for (1), and will recommend the top people in their field for (2). Those top people are the ones you really want to listen to for (1), if you can, but ordinary people would not tend to know who they are.
I’m not sure exactly how you should be weighting (1) against (2), but the principle of using both, and following through chains to at least some degree, feels natural.
Yeah, it’s hard to say whether the weights would be negative. As an extreme case, if there were someone who wanted to cause as much suffering as possible, then if that person were really smart, we might gain insight into how to reduce suffering by flipping around the policies he advocated. If someone wants you to get a score of zero on a binary multiple-choice test, you can get a perfect score by flipping the answers. These cases are rare, though. Even the hypothetical suffering maximizer still has many correct beliefs, e.g., that you need to breathe air to stay alive.
I agree that in principle, you don’t want some discontinuous distinction between elites and non-elites. I also agree with your points (a) - (c). Something like PageRank seems good to me, though of course I would want to be tentative about the details.
In practice, my suspicion is that most of what’s relevant here comes from the very elite people’s thinking, so that not much is lost by just focusing on their opinions. But I hold this view pretty tentatively. I presented the ideas the way I did partly because of this hunch and partly for ease of exposition.
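To make the weighting idea concrete, here is a minimal sketch of PageRank-style trust propagation. Everything in it (the three people, the trust matrix, the damping factor) is an assumption invented purely for illustration, not something proposed above:

```python
import numpy as np

# Toy PageRank-style trust weighting (illustrative only; the people and
# numbers are made up). trust[i][j] = how much person i defers to person j.
trust = np.array([
    [0.0, 0.7, 0.3],  # layperson: defers mostly to the doctor
    [0.1, 0.0, 0.9],  # doctor: defers mostly to the specialist
    [0.2, 0.8, 0.0],  # specialist: defers back to doctors/peers
])

# Normalize rows so each person's total deference sums to 1.
T = trust / trust.sum(axis=1, keepdims=True)

# Power iteration with damping, as in PageRank: weight flows along
# "whom do you trust?" links rather than hyperlinks.
n = T.shape[0]
damping = 0.85
w = np.ones(n) / n
for _ in range(100):
    w = (1 - damping) / n + damping * (w @ T)

# Continuous weights for everyone, rather than a binary elite / non-elite cutoff.
print(w / w.sum())
```

The point of the sketch is just that a continuous scheme like this subsumes the binary elite cutoff as a special case; it does not yet handle the complications noted above, such as downweighting copied beliefs or dampening outliers.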
Nick, what do you do about the Pope getting extremely high PageRank by your measure? You could say that most people who trust his judgment aren’t elites themselves, but some certainly are (e.g., heads of state, CEOs, celebrities). Every president in US history has given very high credence to the moral teachings of Jesus, and some have even given high credence to his factual teachings. Hitler had very high PageRank during the 1930s, though I guess he doesn’t now, and you could say that any algorithm makes mistakes some of the time.
ETA: I guess you did say in your post that we should be less reliant on elite common sense in areas like religion and politics where rationality is less prized. But I feel like a similar thing could be said to some extent of debates about moral conclusions. The cleanest area of application for elite common-sense is with respect to verifiable factual claims.
I don’t have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke’s Muslim theology case here.
One thing I’d say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I’d admit that many others are not. Examples of good ones include:
Love your neighbor as yourself (I’d translate this as “treat others as you would like to be treated”)
Focus on identifying and managing your own personal weaknesses rather than criticizing others for their weaknesses
Prioritize helping poor and disenfranchised people
Don’t let your acts of charity be motivated by finding approval from others
These are all drawn from Jesus’s Sermon on the Mount, which is arguably his most celebrated set of moral teachings.
Good points. Of course, depending on the Pope in question, you also have teachings like the sinfulness of homosexuality, the evil of birth control, and the righteousness of God in torturing nonbelievers forever. Many people place more weight on these beliefs than they do on those of liberal/scientific elites.
It seems like you’re going to get clusters of authority sentiment. Educated people will place high authority on impressive intellectuals, business people, etc. Conservative religious people will tend to place high authority on church leaders, religious founders, etc. and very low authority on scientists, at least when it comes to metaphysical questions rather than what medicine to take for an ailment. (Though there are plenty of skeptics of traditional medicine too.) What makes the world of Catholic elites different from the world of scientific elites? I mean, some people think the Pope is a stronger authority on God than anyone thinks the smartest scientist is about physics.
For example, the reason most elites don’t seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don’t bite Pascalian bullets.
How do you know this? It’s true that their utility functions aren’t linear, but it doesn’t follow that that’s why they don’t take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding with Earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.)
Dovetailing from my comment above, I think there’s a risk of following the line of thought “I’m doing X because it fulfills certain values that I have. Other people don’t have these values. So the fact that they don’t engage in X, and don’t think that doing X is a good idea, isn’t evidence against X being a good idea for me.” What this overlooks is the possibility that, even though they don’t share your values, doing X or something analogous to X would fulfill their (different) values conditional on your factual beliefs being right, so that the fact that they don’t do or endorse X is evidence against the factual beliefs connected with X. In a given instance, it will be a subtle judgment call how much weight to give to this possibility, but I think it should always be considered.
Fair enough. :) Yes, from the fact that probability * utility is small, we can’t tell whether the probability is small or the utility is, or both. In the case of shaping AI specifically, I haven’t heard good arguments against assigning it a non-negligible probability of success, and I also know that many people refuse Pascalian wagers at least partly because they dislike Pascalian wagers as such rather than because they disagree with the premises. Combining these suggests the probability side isn’t so much the issue, though this suggestion stands to be verified. Also, people will often feign ridiculously small probabilities to get out of Pascalian wagers, but they usually make these proclamations after the fact, or else they are the kind of people who say “any probability less than 0.01 is set to 0” (except when wearing seat belts to protect against a car accident or something, highlighting what Nick said about people potentially being more rational for important near-range decisions).
Anyway, not accepting a Pascalian wager does not mean you don’t agree with the probability and utility estimates; maybe you think the wager is missing the forest for the trees and ignoring bigger-picture issues. I think most Pascalian wagers can be defused by saying, “If that were true, this other thing would be even more important, so you should focus on that other thing instead.” But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
You are also correct that differences in moral values don’t completely block an update to probabilities when I find my actions diverging from those of others. However, in cases where people do make their probabilities explicit, I don’t normally diverge substantially (or if I do, I tend to update somewhat), and in those cases, divergent values make up the remainder of the gap (usually most of it). Of course, I may have already updated the most in exactly the cases where people have made their probabilities explicit, so maybe there’s bigger latent epistemic divergence when we’re farther from the lamppost.
“If that were true, this other thing would be even more important, so you should focus on that other thing instead.” But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community), I think that a large part of what’s going on is that people recognize that they have a substantial comparative advantage in a given domain, and think that they can have the biggest impact by doing what they’re best at, and so don’t try to optimize between causes. I think that their reasoning is a lot closer to the mark than initially meets the eye, for reasons that I gave in my posts Robustness of Cost-Effectiveness Estimates and Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren’t utilitarian. But because of the number of people, and the diversity of comparative advantages, some of them will be working on problems that are utilitarian by chance, and will learn a lot about how best to address these problems. You may argue that the problems that they’re working on are different from the problems that you’re interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
As for people not working to shape AI, I think that the utilitarian expected value of working to shape AI is lower than it may initially appear. Some points:
For reasons that I outline in this comment, I think that the world’s elites will do a good job of navigating AI risk. Working on AI risk is in part fungible, and I believe that the effect size is significant.
If I understand correctly, Peter Thiel has argued that the biggest x-risk comes from the possibility that if economic growth halts, we’ll shift from a positive-sum situation to a zero-sum situation, which will erode prosocial behavior, which could give rise to a self-reinforcing downward spiral that leads to societal collapse. We’ve already used lots of natural resources, and so might not be able to recover from a societal collapse. Carl has argued against this, but Peter Thiel is very sophisticated and so his view can’t be dismissed out of hand. This increases the expected value of pushing on economic growth relative to AI risk reduction.
More generally, there are lots of X for which there’s a small probability that X is the limiting factor to a space-faring civilization. For example, maybe gold is necessary for building spacecraft that can travel from Earth to places with more resources, so that the limiting factor to a space-faring civilization is gold, and the number one priority should be preventing gold from being depleted. I think this particular scenario is very unlikely; gold is only one example. Note that pushing on economic growth reduces the probability that gold will be depleted before it’s too late, and I think the same holds for many values of X. If so, the prima facie reaction “but if gold is the limiting factor, then one should pursue more direct interventions than pushing on economic growth” loses force, because pushing on economic growth has a roughly uniform positive impact across different values of X.
I give a case for near-term helping (excluding ripple effects) potentially having astronomical benefits comparable to those of AI risk reduction in this comment.
An additional consideration that buttresses the above point is that as you’ve argued, the future may have negative expected value. Even if this looks unlikely, it increases the value of near-term helping relative to AI risk reduction, and since near-term helping might have astronomical benefits comparable to those of AI risk reduction, it increases the value by a nontrivial amount.
Viewing all of these things in juxtaposition, I wouldn’t take people’s low focus on AI risk reduction as very strong evidence that people don’t care about astronomical waste. See also my post Many Weak Arguments and the Typical Mind: the absence of an attempt to isolate the highest expected value activities may be adaptive rather than an indication of lack of seriousness of purpose.
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community)
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren’t utilitarian.
Yes, this was my point.
You may argue that the problems that they’re working on are different from the problems that you’re interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
Definitely. I wouldn’t claim otherwise.
I wouldn’t take people’s low focus on AI risk reduction as very strong evidence that people don’t care about astronomical waste.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Compare with behavioral economics. You can explain away any given discrepancy from classical microeconomic behavior by rational agents through an epicycle in the theory, but combined with all that we know about people’s psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Carl has argued against this, but Peter Thiel is very sophisticated and so his view can’t be dismissed out of hand.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
Yes, this was my point.
I was acknowledging it :-)
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Elite common sense says that voting is important for altruistic reasons. It’s not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it’s not clear that people have bounded utility functions. (For what it’s worth, I no longer consider myself to have a bounded utility function.)
People’s moral intuitions do deviate from utilitarianism, e.g. probably most people don’t subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
I’ve been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has remarkably broad and penetrating knowledge. He may have the highest crystallized intelligence of anybody I’ve ever encountered. I would not be surprised if he’s studied the possibility of stagnation and societal collapse in more detail than Carl has.
Elite common sense says that voting is important for altruistic reasons.
This is because they’re deontologists, not because they’re consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there’s more overlap between deontology and consequentialism than meets the eye.)
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
It may be best to examine this on a case-by-case basis. We don’t need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than to their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there’s enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know from other evidence that people tend to be irrational in the religious domain, and so we can somewhat discount the observed behavior there.
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
I Googled the question and found similar responses in this article:
One reason that people often offer for voting is “But what if everybody thought that way?” [...]
Another reason for voting, offered by political scientists and lay individuals alike, is that it is a civic duty of every citizen in a democratic country to vote in elections. It’s not about trying to affect the electoral outcome; it’s about doing your duty as a democratic citizen by voting in elections.
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation:
I believe that this pattern is quite general. Our intuitions are not utilitarian, and as a result it is often possible to devise cases in which our intuitions conflict with utilitarianism. But at the same time, our intuitions are somewhat constrained by utilitarianism. This is because we care about utilitarian outcomes, and when a practice is terribly anti-utilitarian, there is, sooner or later, a voice in favor of abolishing it, even if the voice is not explicitly utilitarian. Take the case of drunk driving. Drinking is okay. Driving is okay. Doing both at the same time isn’t such an obviously horrible thing to do, but we’ve learned the hard way that this intuitively innocuous, even fun, activity is tremendously damaging. And now, having moralized the issue with the help of organizations like Mothers Against Drunk Driving—what better moral authority than Mom?—we are prepared to impose very stiff penalties on people who aren’t really “bad people,” people with no general anti-social tendencies. We punish drunk driving and related offenses in a way that appears (or once appeared) disproportionately harsh because we’ve paid the utilitarian costs of not doing so. The same might be said of harsh penalties applied to wartime deserters and draft-dodgers. The disposition to avoid situations in which one must kill people and risk being killed is not such an awful disposition to have, morally speaking, and what could be a greater violation of your “rights” than your government’s sending you, an innocent person, off to die against your will? Nevertheless we are willing to punish people severely, as severely as we would punish violent criminals, for acting on that reasonable and humane disposition when the utilitarian stakes are sufficiently high.
The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Explicitly yes, but implicitly...?
Just ask people why they vote,
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many who are quantitatively oriented, I suspect, aren’t familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn’t make a difference in expectation, so maybe most people fall into the bucket of those who haven’t really thought about the matter rigorously at all. (Remember, we’re including English and Art majors here.)
You could say, “If they knew the arguments, they would be persuaded,” which may be true, but that doesn’t explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing).
“It’s a civic duty” is deontological if anything is, because deontology is duty-based ethics.
“If everyone didn’t vote, things would be bad” is an application of Kant’s categorical imperative.
“Our forefathers died for this, so we shouldn’t waste it” is not deontological—just the sunk-cost fallacy.
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
At some point it may become a debate about the teleological level at which you assess their “reasons.” For individuals, it’s very likely that the value of voting is terminal in some sense, absorbed through cultural acculturation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It’s similar to assessing the “reason” why a mother cares for her child. At an individual / neural level it’s based on reward circuitry. At a broader evolutionary level, it’s based on bequeathing genes.
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn’t demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error; indeed, it will prompt actual timeless decision theorists to defect against you.
Thanks! I’m not sure I understood your comment. Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
Yes, specifically similar with respect to decision theory implementation.
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate?
Yes, it applies among (some of) that class of humans.
I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
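As a toy version of the calculation behind this exchange (the payoff numbers are assumed purely for illustration): suppose standard prisoner’s-dilemma payoffs for you of 3 for mutual cooperation, 1 for mutual defection, 0 if you cooperate while the other defects, and 4 if you defect while the other cooperates, and suppose the other person, because they reason like you, ends up making the same choice you do with probability q. Then

$$
\begin{aligned}
\mathbb{E}[\text{cooperate}] &= 3q + 0\,(1-q) = 3q,\\
\mathbb{E}[\text{defect}] &= 1\cdot q + 4\,(1-q) = 4 - 3q,\\
\mathbb{E}[\text{cooperate}] > \mathbb{E}[\text{defect}] &\iff q > \tfrac{2}{3}.
\end{aligned}
$$

So cooperating only wins when the correlation is substantial, which is consistent with the claim above: the effect can matter for a near-copy of yourself but plausibly not for the loosely similar electorate involved in voting.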
You’re assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don’t do it much at all. Typically a statement like “any probability less than 0.01 is set to 0” really means “I have this set of preferences, but I think I can derive a statement about probabilities from that set of preferences.” Pointing out that they don’t actually ignore a probability of 0.01 when wearing a seatbelt, then, should lead to a response of “I guess my derivation isn’t quite right” and lead them to revise the statement, but it’s not a reason why they should change their preferences in the cases that they originally derived the statement from.
Yep, that’s right. In my top-level comment, I said, “In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.” Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.) elites tend to be (relatively) more interested in utilitarian calculations.
This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Unfortunately, it’s a mixed case: there were motives besides pure altruism/self-interest. For example, Edward Teller was an advocate of asteroid defense… no doubt in part because it was a great excuse for using atomic bombs and keeping space and laser-related research going.
How do you know this? It’s true that their utility functions aren’t linear, but it doesn’t follow that that’s why they don’t take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding with Earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
It’s pretty easy to accept the possibility that an asteroid impact could wipe out humanity, given that something very similar has happened before. You have to overcome a much larger inferential distance to explain the risks from an intelligence explosion.
It can be murky to infer what people believe based on actions or commitments, because this mixes two quantities: Probabilities and values. For example, the reason most elites don’t seem to take seriously efforts like shaping trajectories for strong AI is not because they think the probabilities of making a difference are astronomically small but because they don’t bite Pascalian bullets. Their utility functions are not linear. If your utility function is linear, this is a reason that your actions (if not your beliefs) will diverge from those of most elites. In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.
I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell. In Pascalian cases, we have claims that people in general aren’t good at thinking about and to which people generally assign low weight when they are acquainted with the arguments. I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.
I also endorse Jonah’s point about some people caring about what you care about, but for different reasons. Just as we are weird, there can be other people who are weird in different ways that make them obsessed with the things we’re obsessed with for totally different reasons. Just as some scientists are obsessed with random stuff like dung beetles, I think a lot of asteroids were tracked because some scientists were really obsessed with asteroids in particular and wanted to ensure that all asteroids were carefully tracked, far beyond the regular value that normal people place on tracking asteroids. I think this can include some borderline Pascalian issues; for example, important government agencies care about speculative threats to national security. Dick Cheney famously said, “If there’s a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response.” Similarly, there can be people who are obsessed with many issues far out of proportion to what most ordinary people care about. Looking at what “most people” care about is a less robust way to find gaps in a market than it can appear at first. (I know you don’t think it would be good to save the world, but I think the example still illustrates the point to some extent. An example more relevant to you would be that some scientists might just be really interested in insects and do a lot of the research that you’d think would be valuable, even though if you had just thought “no one cares about insects, so this research will never happen,” you’d be wrong.)
I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell.
As far as the GiveWell point, I meant “proper Pascalian bullets” where the probabilities are computed after constraining by some reasonable priors (keeping in mind that a normal distribution with mean 0 and variance 1 is not a reasonable prior in general).
In Pascalian cases, we have claims that people in general aren’t good at thinking about and to which people generally assign low weight when they are acquainted with the arguments.
Low probability, yes, but not necessarily low probability*impact.
I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.
As I mentioned in another comment, I think most Pascalian wagers that one comes across are fallacious because they miss even bigger Pascalian wagers that should be pursued instead. However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like “the Overwhelming Importance of Shaping the Far Future.” My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don’t have linear utility functions and/or don’t like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?
This highlights a meta-point in this discussion: Often what’s under debate here is not the framework but instead claims about (1) whether elites would or would not agree with a given position upon hearing it defended and (2) whether their sustained disagreement even after hearing it defended results from divergent facts, values, or methodologies (e.g., not being consequentialist). It can take time to assess these, so in the short term, disagreements about what elites would come to believe are a main bottleneck for using elite common sense to reach conclusions.
However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like “the Overwhelming Importance of Shaping the Far Future.” My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don’t have linear utility functions and/or don’t like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?
I disagree with the claim that the argument for shaping the far future is a Pascalian wager. In my opinion, there is a reasonably high, reasonably non-idiosyncratic probability that humanity will survive for a very long time, that there will be a lot of future people, and/or that future people will have a very high quality of life. Though I have not yet defended this claim as well as I would like, I also believe that many conventionally good things people can do push toward future generations facing future challenges and opportunities better than they otherwise would, which with a high enough and conventional enough probability makes the future go better. I think that these are claims which elite common sense would be convinced of, if in possession of my evidence. If elite common sense would not be so convinced, I would consider abandoning these assumptions.
Regarding the more purely moral claims, I suspect there are a wide variety of considerations which elite common sense would give weight to, and that very long-term considerations are one type of important consideration which would get weight according to elite common sense. It may also be, in part, a fundamental difference of values, where I am part of a not-too-small contingent of people who have distinctive concerns. However, in genuinely altruistic contexts, I think many people would give these considerations substantially more weight if they thought about the issue carefully.
Near the beginning of my dissertation, I actually speak about the level of confidence I have in my thesis quite tentatively:
How convinced should you be by the arguments I’m going to give? I’m defending an unconventional thesis and my support for that thesis comes from highly speculative arguments. I don’t have great confidence in my thesis, or claim that others should. But I am convinced that it could well be true, that the vast majority of thoughtful people give the claim less credence than they should, and that it is worth thinking about more carefully. I aim to make the reader justified in taking a similar attitude. (p. 3, Beckstead 2013)
I disagree with the claim that the argument for shaping the far future is a Pascalian wager.
I thought some of our disagreement might stem from our meaning different things, and that seems to have been true here. Even if the probability of humanity surviving a long time is large, there remains entropy in our influence, along with butterfly effects, such that it seems extremely unlikely that what we do now will actually make a pivotal difference in the long term, and we could easily be getting the sign wrong. This makes the probabilities small enough to seem Pascalian to most people.
It’s very common for people to say, “Predictions are hard, especially about the future, so let’s focus on the short term where we can be more confident we’re at least making a small positive impact.”
It’s very common for people to say, “Predictions are hard, especially about the future, so let’s focus on the short term where we can be more confident we’re at least making a small positive impact.”
If by short-term you mean “what happens in the next 100 years or so,” I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the distant future. But I believe that the best way to deal with this challenge is to empower humanity to deal with the relatively foreseeable and unforeseeable challenges and opportunities that it will face over the next few generations. This doesn’t mean “let’s look only at short-run well-being boosts,” but something more like “let’s broadly improve cooperation, motives, access to certain types of information, narrow and broad technological capabilities, and intelligence and rationality to deal with the problems we can’t foresee, and let’s rely on the best evidence we can to prepare for the problems we can foresee.” I say a few things about this issue here. I hope to say more about it in the future.
An analogy would be that if you were a 5-year-old kid and you primarily cared about how successful you were later in life, you should focus on self-improvement activities (like developing good habits, gaining knowledge, and learning how to interact with other people) and health and safety issues (like getting adequate nutrition, not getting hit by cars, not poisoning yourself, not falling off of tall objects, and not eating lead-based paint). You should not try to anticipate fine-grained challenges in the labor market when you graduate from college or disputes you might have with your spouse. I realize that this analogy may not be compelling, but perhaps it illuminates my perspective.
Insofar as my own actions are atypical, I intend for it to result from atypical moral beliefs rather than atypical factual beliefs. (If you can think of instances of clearly atypical factual beliefs on my part, let me know.) Of course, you could claim, elite common sense should apply also as a prior to what my own moral beliefs actually are, given the fallibility of introspection. This is true, but its importance depends on how abstractly I view my own moral values. If I ask questions about what an extrapolated Brian would think upon learning more, having more experiences, etc., then the elite prior has a lot to say on this question. But if I’m more concerned with my very immediate emotional reaction, then there’s less room for error and less that the common-sense prior has to say. The fact that my moral values are sometimes not strongly affected by common-sense moral values comes from my favoring immediate emotions rather than what (one of many possible) extrapolated Brians would feel upon having further and different life experiences. (Of course, there are many possible further life experiences I could have, which would push me in lots of random directions. This is why I’m not so gung ho about what my extrapolated selves would think on some questions.)
As you point out, one choice point is how much idealization to introduce. At one extreme, you might introduce no idealization at all, so that whatever you presently approve of is what you’ll assume is right. At the other extreme, you might have a great deal of idealization: you may assume that a better guide is what you would approve of if you knew much more, had experienced much more, were much more intelligent, made no cognitive errors in your reasoning, and had much more time to think. I lean toward the latter extreme, as I believe most people who have considered this question do, though I recognize that you want to specify your procedure in a way that leaves some core part of your values unchanged. Still, I think this is a choice that turns on many tricky cognitive steps, any of which could easily be taken in the wrong direction. So I would urge that insofar as you are making a very unusual decision at this step, you should try to understand very carefully the process that others are going through.
ETA: I’d also caution against just straight-out assuming a particular meta-ethical perspective. This is not a case where you are an expert in the sense of someone who elite common sense would defer to, and I don’t think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
I don’t think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One’s choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn’t much concern me. Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don’t think there’s something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even among moral realists, I’ve never heard anyone argue that it’s a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One’s choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn’t much concern me.
I’m a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.
Like Luke Muehlhauser, I believe that we don’t even know what we’re asking when we ask ethical questions, and I suspect we don’t really know what we’re asking when we ask meta-ethical questions either. As far as I can tell, you’ve picked one possible candidate for what we could be asking—”what do I care about right now?”—among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that’s what you’re asking.
Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don’t think there’s something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this.
I think most people would just think you had made an error somewhere and not be able to say where it was, and add that you were talking about a completely murky issue that people aren’t good at thinking about.
I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on.
I think it’s fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.
Like Luke Muehlhauser, I believe that we don’t even know what we’re asking when we ask ethical questions
Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: “Within 20 seconds of arguing about the definition of ‘desire’, someone will say, ‘Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.’”
Here I would say, “Screw ethics and meta-ethics. All I’m saying is I want to do what I feel like doing, even if you and other elites don’t agree with it.”
I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I should point out that “doing what I feel like doing” doesn’t necessarily mean running roughshod over other people’s values. I think it’s generally better to seek compromise and remain friendly to those with whom you want to cooperate. It’s just that this is an instrumental concession, not a sign that I actually agree with the values of those I’m willing to be nice to.
Here I would say, “Screw ethics and meta-ethics. All I’m saying is I want to do what I feel like doing, even if you and other elites don’t agree with it.”
I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don’t think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say “I want to do what I want to do” I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don’t fully understand but seems to deliver valuable results.
When you say “I want to do what I want to do” I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I’m not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it’s not dependent on a controversial theory of meta-ethics. It’s just that I intuitively don’t like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
On some questions, my emotions are too strong, and it feels like it would be bad to budge my current stance.
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don’t fully understand but seems to deliver valuable results.
Fair enough. :) I’ll buy that way of putting it.
Anyway, if I were really as unreasonable as it sounds, I wouldn’t be talking here and putting at risk the preservation of my current goals.
I’m not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it’s not dependent on a controversial theory of meta-ethics. It’s just that I intuitively don’t like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik’s present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I’m coming from in your language. This is hard to write and think about.
an unusual approach to dealing with moral questions
Why do you think it’s unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are “intuitive”—they go by gut feeling, basically. I think that’s the normal mode in which most of humanity operates most of the time.
I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian’s perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.
I think other people are significantly more responsive to values disagreements
That’s a pretty meaningless statement without specifying which values. How responsive “other people” would be to value disagreements about child pornography, for example, do you think?
I suspect Nick would say that if there were respected elites who favored increasing the amount of child pornography, he would give some weight to the possibility that such a position was in fact something he would come to agree with upon further reflection.
Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours.
Or, most likely of all, it’s because I don’t care to justify it. If you want to call “not wanting to justify a stance” a bias or blind spot, I’m ok with that.
Hope that helps articulate where I’m coming from in your language. This is hard to write and think about.
This is a question which it seems I wasn’t sufficiently clear about. I count someone as an “expert on X” roughly when they are someone that a broad coalition of trustworthy people would defer to on questions about X. As I explained in another comment, if you don’t know what the experts on X think, I recommend trying to find out (if it’s easy/important enough) and going with what the broad coalition of trustworthy people thinks until then. So it may be that some non-elite garbage collectors are experts on garbage collection, and a broad coalition of trustworthy people would defer to them on questions of garbage collection, once that coalition knows what these people think about garbage collection.
Why focus on people who are regarded as most trustworthy by many people? I think those people are likely to be more trustworthy than ordinary people, as I tried to suggest in my quick Quora experiment.
Cool—that makes sense. In principle, would you still count everyone with some (possibly very small) weight, the way PageRank does? (Or maybe negative weight in a few cases.) A binary separation between elites and non-elites seems hacky, though of course, it may in fact be best in practice to make the analysis tractable. Cutting out part of the sample also leads to a biased estimator, but maybe that’s not such a big deal in most cases if the weight on the part you cut out was small anyway. You could also give different weights among the elites. Basically, elites vs. non-elites is a binary approximation of a more continuous weighting distribution. Anyway, it may be misleading to think of this as purely a weighted sample of opinion, because (a) you want to reduce the weight of beliefs that are copied from each other and (b) you may want to harmonize the beliefs in a way that’s different from blind averaging. Also, as you suggested, (c) you may want to dampen outliers to avoid pulling the average too much toward them.
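To make the PageRank analogy concrete, here’s a minimal sketch of what a continuous trust-weighting could look like. All of the names, trust weights, and beliefs below are made up for illustration; this isn’t anyone’s actual proposal.

```python
import numpy as np

# Toy "PageRank over deference" sketch. trust[i][j] = how much person i
# defers to person j on this particular question (illustrative numbers only).
names = ["layperson", "doctor", "specialist", "top_researcher"]
trust = np.array([
    [0.0, 0.8, 0.1, 0.1],   # laypeople mostly defer to doctors
    [0.1, 0.0, 0.7, 0.2],   # doctors mostly defer to specialists
    [0.0, 0.1, 0.0, 0.9],   # specialists defer to top researchers
    [0.0, 0.1, 0.4, 0.5],   # top researchers weigh specialists and each other
])
P = trust / trust.sum(axis=1, keepdims=True)  # row-normalize deference

# Power iteration with damping, as in PageRank: a person's stationary weight
# reflects how much trust flows to them through chains of deference.
n, damping = len(names), 0.85
w = np.ones(n) / n
for _ in range(100):
    w = (1 - damping) / n + damping * (w @ P)
w /= w.sum()

# Each person's stated belief, e.g., the probability they assign to some claim.
beliefs = np.array([0.2, 0.4, 0.6, 0.7])

print(dict(zip(names, w.round(3))))
print("trust-weighted aggregate belief:", round(float(w @ beliefs), 3))
```

A hard elite/non-elite cutoff then corresponds to zeroing out everyone whose stationary weight falls below some threshold. Points (a)–(c) above would need extra machinery on top of this (e.g., down-weighting beliefs copied from a common source), which the sketch ignores.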
This sounds roughly right to me. Note that there are two different things you really want to know about people:
(1) What they believe on the matter;
(2) Who they think is trustworthy on the matter.
Often it seems that (2) is more important, even when you’re looking at people who are deemed trustworthy. If I have a question about lung disease, most people will not have much of an answer for (1), and will recommend doctors for (2). Most doctors will have some idea, and will recommend specialists for (2). Specialists are likely to have a pretty good idea for (1), and will recommend the top people in their field for (2). Those top people are the ones you really want to listen to for (1), if you can, but regular people would not tend to know who they were.
I’m not sure exactly how you should be weighting (1) against (2), but the principle of using both, and following through chains to at least some degree, feels natural.
Probably not.
Yeah, it’s hard to say whether the weights would be negative. As an extreme case, if there was someone who wanted to cause as much suffering as possible, then if that person was really smart, we might gain insight into how to reduce suffering by flipping around the policies he advocated. If someone wants you to get a perfect zero score on a binary multiple-choice test, you can get a perfect score by flipping the answers. These cases are rare, though. Even the hypothetical suffering maximizer still has many correct beliefs, e.g., that you need to breathe air to stay alive.
I agree that in principle, you don’t want some discontinuous distinction between elites and non-elites. I also agree with your points (a) - (c). Something like PageRank seems good to me, though of course I would want to be tentative about the details.
In practice, my suspicion is that most of what’s relevant here comes from the very elite people’s thinking, so that not much is lost by just focusing on their opinions. But I hold this view pretty tentatively. I presented the ideas the way I did partly because of this hunch and partly for ease of exposition.
Nick, what do you do about the Pope getting extremely high PageRank by your measure? You could say that most people who trust his judgment aren’t elites themselves, but some certainly are (e.g., heads of state, CEOs, celebrities). Every president in US history has given very high credence to the moral teachings of Jesus, and some have even given high credence to his factual teachings. Hitler had very high PageRank during the 1930s, though I guess he doesn’t now, and you could say that any algorithm makes mistakes some of the time.
ETA: I guess you did say in your post that we should be less reliant on elite common sense in areas like religion and politics where rationality is less prized. But I feel like a similar thing could be said to some extent of debates about moral conclusions. The cleanest area of application for elite common sense is with respect to verifiable factual claims.
I don’t have a lot to add to my comments on religious authorities, apart from what I said in the post and what I said in response to Luke’s Muslim theology case here.
One thing I’d say is that many of the Christian moral teachings that are most celebrated are actually pretty good, though I’d admit that many others are not. Examples of good ones include:
Love your neighbor as yourself (I’d translate this as “treat others as you would like to be treated”)
Focus on identifying and managing your own personal weaknesses rather than criticizing others for their weaknesses
Prioritize helping poor and disenfranchised people
Don’t let your acts of charity be motivated by finding approval from others
These are all drawn from Jesus’s Sermon on the Mount, which is arguably his most celebrated set of moral teachings.
Good points. Of course, depending on the Pope in question, you also have teachings like the sinfulness of homosexuality, the evil of birth control, and the righteousness of God in torturing nonbelievers forever. Many people place more weight on these beliefs than they do on those of liberal/scientific elites.
It seems like you’re going to get clusters of authority sentiment. Educated people will place high authority on impressive intellectuals, business people, etc. Conservative religious people will tend to place high authority on church leaders, religious founders, etc. and very low authority on scientists, at least when it comes to metaphysical questions rather than what medicine to take for an ailment. (Though there are plenty of skeptics of traditional medicine too.) What makes the world of Catholic elites different from the world of scientific elites? I mean, some people think the Pope is a stronger authority on God than anyone thinks the smartest scientist is about physics.
Hi Brian :-)
How do you know this? It’s true that their utility functions aren’t linear, but it doesn’t follow that that’s why they don’t take such efforts seriously. Near-Earth Objects: Finding Them Before They Find Us reports on concerted efforts to prevent extinction-level asteroids from colliding with Earth. This shows that people are (sometimes) willing to act on small probabilities of human extinction.
Dovetailing from my comment above, I think there’s a risk of following the line of thought “I’m doing X because it fulfills certain values that I have. Other people don’t have these values. So the fact that they don’t engage in X, and don’t think that doing X is a good idea, isn’t evidence against X being a good idea for me.” The possibility this overlooks is that, even though they don’t share your values, doing X (or something analogous to X) would fulfill their different values if your factual beliefs were right, so that their not doing or endorsing X is evidence against the factual beliefs connected with X. In a given instance, there will be a subtle judgment call as to how much weight to give to this possibility, but I think it should always be considered.
Fair enough. :) Yes, from the fact that probability * utility is small, we can’t tell whether the probability is small or the utility is, or both. In the case of shaping AI specifically, I haven’t heard good arguments against assigning it a non-negligible probability of success, and I also know that many people don’t bite Pascalian wagers at least partly because they don’t like Pascalian wagers rather than because they disagree with the premises, so combining these suggests the probability side isn’t so much the issue, though this suggestion remains to be verified. Also, people will often feign having ridiculously small probabilities to get out of Pascalian wagers, but they usually make these proclamations after the fact, or else are the kind of people who say “any probability less than 0.01 is set to 0” (except when wearing seat belts to protect against a car accident or something, highlighting what Nick said about people potentially being more rational for important near-range decisions).
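To illustrate with purely made-up numbers, the same small product is compatible with very different splits between the two factors:

$$
\underbrace{10^{-9}}_{\text{tiny probability}} \times \underbrace{10^{7}}_{\text{huge utility}} \;=\; 10^{-2} \;=\; \underbrace{10^{-2}}_{\text{modest probability}} \times \underbrace{10^{0}}_{\text{modest utility}},
$$

so observing that someone acts as if the product is small doesn’t tell you which factor they’re discounting.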
Anyway, not accepting a Pascalian wager does not mean you don’t agree with the probability and utility estimates; maybe you think the wager is missing the forest for the trees and ignoring bigger-picture issues. I think most Pascalian wagers can be defused by saying, “If that were true, this other thing would be even more important, so you should focus on that other thing instead.” But then you should actually focus on that other thing instead rather than focusing on neither, which most people tend to do. :P
You are also correct that differences in moral values don’t completely shield my probabilities from updating when I find my actions diverging from those of others. However, in cases when people do make their probabilities explicit, I don’t normally diverge substantially (or if I do, I tend to update somewhat), and in these particular cases, divergent values comprise the remainder of the gap (usually most of it). Of course, I may have already updated the most in those cases where people have made their probabilities explicit, so maybe there’s bigger latent epistemic divergence when we’re distant from the lamp post.
If you restrict yourself to thoughtful, intelligent people who care about having a big positive impact on global welfare (which is a group substantially larger than the EA community), I think that a large part of what’s going on is that people recognize that they have a substantial comparative advantage in a given domain, and think that they can have the biggest impact by doing what they’re best at, and so don’t try to optimize between causes. I think that their reasoning is a lot closer to the mark than initially meets the eye, for reasons that I gave in my posts Robustness of Cost-Effectiveness Estimates and Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited.
Of course, this is relative to more conventional values than utilitarianism, and so lots of their efforts go into things that aren’t utilitarian. But because of the number of people, and the diversity of comparative advantages, some of them will be working on problems that are utilitarian by chance, and will learn a lot about how best to address these problems. You may argue that the problems that they’re working on are different from the problems that you’re interested in addressing, but there may be strong analogies between the situations, and so their knowledge may be transferable.
As for people not working to shape AI, I think that the utilitarian expected value of working to shape AI is lower than it may initially appear. Some points:
For reasons that I outline in this comment, I think that the world’s elites will do a good job of navigating AI risk. Working on AI risk is in part fungible, and I believe that the effect size is significant.
If I understand correctly, Peter Thiel has argued that the biggest x-risk comes from the possibility that if economic growth halts, then we’ll shift from a positive-sum situation to a zero-sum situation, which will erode prosocial behavior, which could give rise to a self-reinforcing feedback loop that leads to societal collapse. We’ve already used lots of natural resources, and so might not be able to recover from a societal collapse. Carl has argued against this, but Peter Thiel is very sophisticated and so his view can’t be dismissed out of hand. This increases the expected value of pushing on economic growth relative to AI risk reduction.
More generally, there are lots of X for which there’s a small probability that X is the limiting factor to a spacefaring civilization. For example, maybe gold is necessary for building spacecraft that can travel from Earth to places with more resources, so that the limiting factor is gold and the number one priority should be preventing gold from being depleted. I think that this is very unlikely; I’m only giving one example. Note that pushing on economic growth reduces the probability that gold will be depleted before it’s too late, and I think the same is true for many values of X. If so, the prima facie reaction “but if gold is the limiting factor, then one should pursue more direct interventions than pushing on economic growth” loses force, because pushing on economic growth has a uniformly positive impact across different values of X.
I give a case for near term helping (excluding ripple effects) potentially having astronomical benefits comparable to those of AI risk reduction in this comment.
An additional consideration that buttresses the above point is that as you’ve argued, the future may have negative expected value. Even if this looks unlikely, it increases the value of near-term helping relative to AI risk reduction, and since near-term helping might have astronomical benefits comparable to those of AI risk reduction, it increases the value by a nontrivial amount.
Viewing all of these things in juxtaposition, I wouldn’t take people’s low focus on AI risk reduction as very strong evidence that people don’t care about astronomical waste. See also my post Many Weak Arguments and the Typical Mind: the absence of an attempt to isolate the highest expected value activities may be adaptive rather than an indication of lack of seriousness of purpose.
Thanks, Jonah. :)
But it’s a smaller group than the set of elites used for the common-sense prior. Hence, many elites don’t share our values even by this basic measure.
Yes, this was my point.
Definitely. I wouldn’t claim otherwise.
In isolation, their not working on astronomical waste is not sufficient proof that their utility functions are not linear. However, combined with everything else I know about people’s psychology, it seems very plausible that they in fact don’t have linear utility functions.
Compare with behavioral economics. You can explain away any given discrepancy from classical microeconomic behavior by rational agents through an epicycle in the theory, but combined with all that we know about people’s psychology, we have reason to think that psychological biases themselves are playing a role in the deviations.
Not dismissed out of hand, but downweighted a fair amount. I think Carl is more likely to be right than Thiel on an arbitrary question where Carl has studied it and Thiel has not. Famous people are busy. Comments they make in an offhand way may be circulated in the media. Thiel has some good general intuition, sure, but his speculations on a given social trend don’t compare with more systematic research done by someone like Carl.
But a lot of the people within this group use an elite common-sense prior despite having disjoint values, which is a signal that the elite common-sense prior is right.
I was acknowledging it :-)
Elite common sense says that voting is important for altruistic reasons. It’s not clear that this is contingent on the number of people in America not being too big. One could imagine an intergalactic empire with 10^50 people where voting was considered important. So it’s not clear that people have bounded utility functions. (For what it’s worth, I no longer consider myself to have a bounded utility function.)
People’s moral intuitions do deviate from utilitarianism, e.g. probably most people don’t subscribe to the view that bringing a life into existence is equivalent to saving a life. But the ways in which their intuitions differ from utilitarianism may cancel each other out. For example, having read about climate change tail risk, I have the impression that climate change reduction advocates are often (in operational terms) valuing future people more than they value present people.
So I think it’s best to remain agnostic as to the degree to which variance in the humanitarian endeavors that people engage in is driven by variance in their values.
I’ve been extremely impressed by Peter Thiel based on reading notes on his course about startups. He has extremely broad and penetrating knowledge. He may have the highest crystallized intelligence of anybody I’ve ever encountered. I would not be surprised if he’s studied the possibility of stagnation and societal collapse in more detail than Carl has.
This is because they’re deontologists, not because they’re consequentialists with a linear utility function. So rather than suggesting more similarity in values, it suggests less. (That said, there’s more overlap between deontology and consequentialism than meets the eye.)
It may be best to examine on a case-by-case basis. We don’t need to just look at what people are doing and make inferences; we can also look at other psychological hints about how they feel regarding a given issue. Nick did suggest giving greater weight to what people believe (or, in this case, what they do) than their stated reasons for those beliefs (or actions), but he acknowledges this recommendation is controversial (e.g., Ray Dalio disagrees), and on some issues it seems like there’s enough other information to outweigh whatever inferences we might draw from actions alone. For example, we know people tend to be irrational in the religious domain based on other facts and so can somewhat discount the observed behavior there.
Points taken on the other issues we discussed.
How do you know this? Do you think that these people would describe their reason for voting as deontological?
Oh, definitely. The consequentialist justification only happens in obscure corners of geekdom like LessWrong and stat / poli sci journals.
Just ask people why they vote, and most of them will say things like “It’s a civic duty,” “Our forefathers died for this, so we shouldn’t waste it,” “If everyone didn’t vote, things would be bad,” …
I Googled the question and found similar responses in this article.
Interestingly, the author also says: “Your decision to vote or not will not affect whether or not other people will vote (unless you are a highly influential person and you announce your voting intention to the world in advance of the election).” This may be mostly true in practice, but not in the limit as everyone approaches identity with you. It seems like this author is a two-boxer based on his statements. He calls timeless considerations “magical thinking.”
These views reflect the endorsements of various trusted political figures and groups, the active promotion of voting by those with more individual influence, and the raw observation of outcomes affected by bulk political behavior.
In other words, the common sense or deontological rules of thumb are shaped by the consequences, as the consequences drive moralizing activity. Joshua Greene has some cute discussion of this in his dissertation.
Explicitly yes, but implicitly...?
Do you have in mind average people, or, e.g., top 10% Ivy Leaguers … ?
These reasons aren’t obviously deontological (even though they might sound like they are on first hearing). As you say in your comment, timeless decision theory is relevant (transparently so in the last two of the three reasons that you cite).
Even if people did explicitly describe their reasons as deontological, one still wouldn’t know whether this was the case, because people’s stated reasons are often different from their actual reasons.
One would want to probe here to try to tell whether these things reflect terminal values or instrumental values.
Both. Remember that many Ivy Leaguers are liberal-arts majors. Even many that are quantitatively oriented I suspect aren’t familiar with the literature. I guess it takes a certain level of sophistication to think that voting doesn’t make a difference in expectation, so maybe most people fall into the bucket of those who haven’t really thought about the matter rigorously at all. (Remember, we’re including English and Art majors here.)
You could say, “If they knew the arguments, they would be persuaded,” which may be true, but that doesn’t explain why they already vote without knowing the arguments. Explaining that suggests deontology as a candidate hypothesis.
“It’s a civic duty” is deontological if anything is, because deontology is duty-based ethics.
“If everyone didn’t vote, things would be bad” is an application of Kant’s categorical imperative.
“Our forefathers died for this, so we shouldn’t waste it” is not deontological—just the sunk-cost fallacy.
At some point it may become a debate about the teleological level at which you assess their “reasons.” At the level of individuals, the value of voting is very likely terminal in some sense, a product of cultural acclimation. Taking a broader view of why society itself developed this tendency, you might say that it did so for more consequentialist / instrumental reasons.
It’s similar to assessing the “reason” why a mother cares for her child. At an individual / neural level it’s based on reward circuitry. At a broader evolutionary level, it’s based on bequeathing genes.
The main point to my mind here is that apparently deontological beliefs may originate from a combination of consequentialist values with an implicit understanding of timeless decision theory.
He may also be a two-boxer who thinks that one-boxing is magical thinking. However, this instance doesn’t demonstrate that. Acting as if other agents will conditionally cooperate when they in fact will not is an error. In fact, it will prompt actual timeless decision theorists to defect against you.
Thanks! I’m not sure I understood your comment. Did you mean that if the other agents aren’t similar enough to you, it’s an error to assume that your cooperating will cause them to cooperate?
I was drawing the inference about two-boxing from the fact that the author seemed to dismiss the possibility that what you do could possibly affect what others do in any circumstance.
Yes, specifically similar with respect to decision theory implementation.
He seems to be talking about humans as they exist. If (or when) he generalises to all agents he starts being wrong.
Even among humans, there’s something to timeless considerations, right? If you were in a real prisoner’s dilemma with someone you didn’t know but who was very similar to you and had read a lot of the same things, it seems plausible you should cooperate? I don’t claim the effect is strong enough to operate in the realm of voting most of the time, but theoretically timeless considerations can matter for less-than-perfect copies of yourself.
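As a toy illustration of why the degree of similarity matters (standard textbook prisoner’s-dilemma payoffs; the correlation model is a deliberate simplification, not anyone’s exact proposal):

```python
# Suppose the other player ends up choosing the same action as you with
# probability q, because they reason the way you do.
def expected_payoff(q, cooperate):
    # Payoffs to you: (C,C)=3, (C,D)=0, (D,C)=5, (D,D)=1
    if cooperate:
        return q * 3 + (1 - q) * 0
    return q * 1 + (1 - q) * 5

for q in [0.5, 0.7, 0.72, 0.9]:
    print(f"q={q}: EV(cooperate)={expected_payoff(q, True):.2f}, "
          f"EV(defect)={expected_payoff(q, False):.2f}")
# Cooperating comes out ahead once 3q > 5 - 4q, i.e., q > 5/7 ≈ 0.71, so the
# correlation has to be fairly strong: plausible for near-copies, much less so
# for the millions of strangers involved in a national election.
```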
Yes, it applies among (some of) that class of humans.
Yes.
You’re assuming that people work by probabilities and Bayes each time. Nobody can do that for all of their beliefs, and many people don’t do it much at all. Typically, a statement like “any probability less than 0.01 is set to 0” really means “I have this set of preferences, but I think I can derive a statement about probabilities from that set of preferences.” Pointing out that they don’t actually ignore a probability of 0.01 when wearing a seatbelt, then, should lead to a response of “I guess my derivation isn’t quite right” and lead them to revise the statement, but it’s not a reason why they should change their preferences in the cases that they originally derived the statement from.
Yep, that’s right. In my top-level comment, I said, “In any event, many elites are not even systematic or consequentialist in translating utilities times probabilities into actions.” Still, on big government-policy questions that affect society (rather than personal actions, relationships, etc.) elites tend to be (relatively) more interested in utilitarian calculations.
Unfortunately, it’s a mixed case: there were motives besides pure altruism/self-interest. For example, Edward Teller was an advocate of asteroid defense… no doubt in part because it was a great excuse for using atomic bombs and keeping space and laser-related research going.
It’s pretty easy to accept the possibility that an asteroid impact could wipe out humanity, given that something very similar has happened before. You have to overcome a much larger inferential distance to explain the risks from an intelligence explosion.
I don’t endorse biting Pascalian bullets, in part for reasons argued in this post, which I think give further support to some considerations identified by GiveWell. In Pascalian cases, we have claims that people in general aren’t good at thinking about and which people generally assign low weight when they are acquainted with the arguments. I believe that Pascalian estimates of expected value that differ greatly from elite common sense and aren’t persuasive to elite common sense should be treated with great caution.
I also endorse Jonah’s point about some people caring about what you care about, but for different reasons. Just as we are weird, there can be other people who are weird in different ways that make them obsessed with the things we’re obsessed with for totally different reasons. Just as some scientists are obsessed with random stuff like dung beetles, I think a lot of asteroids were tracked because there are some scientists who are really obsessed with asteroids in particular and want to ensure that all asteroids are carefully tracked, far beyond the regular value that normal people place on tracking them. I think this can include some borderline Pascalian issues; for example, there are important government agencies that care about speculative threats to national security. Dick Cheney famously said, “If there’s a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response.” Similarly, there can be people who are obsessed with many issues far out of proportion with what most ordinary people care about. Looking at what “most people” care about is a less robust way to find gaps in a market than it can appear at first. (I know you don’t think it would be good to save the world, but I think the example still illustrates the point to some extent. An example more relevant to you would be that some scientists might just be really interested in insects and do a lot of the research that you’d think would be valuable, even though if you had just thought “no one cares about insects, so this research will never happen” you’d be wrong.)
As far as the GiveWell point, I meant “proper Pascalian bullets” where the probabilities are computed after constraining by some reasonable priors (keeping in mind that a normal distribution with mean 0 and variance 1 is not a reasonable prior in general).
Low probability, yes, but not necessarily low probability*impact.
As I mentioned in another comment, I think most Pascalian wagers that one comes across are fallacious because they miss even bigger Pascalian wagers that should be pursued instead. However, there are some Pascalian wagers that seem genuinely compelling even after looking for alternatives, like “the Overwhelming Importance of Shaping the Far Future.” My impression is that most elites do not agree that the far future is overwhelmingly important even after hearing your arguments because they don’t have linear utility functions and/or don’t like Pascalian wagers. Do you think most elites would agree with you about shaping the far future?
This highlights a meta-point in this discussion: Often what’s under debate here is not the framework but instead claims about (1) whether elites would or would not agree with a given position upon hearing it defended and (2) whether their sustained disagreement even after hearing it defended results from divergent facts, values, or methodologies (e.g., not being consequentialist). It can take time to assess these, so in the short term, disagreements about what elites would come to believe are a main bottleneck for using elite common sense to reach conclusions.
I disagree with the claim that the argument for shaping the far future is a Pascalian wager. In my opinion, there is a reasonably high, reasonably non-idiosyncratic probability that humanity will survive for a very long time, that there will be a lot of future people, and/or that future people will have a very high quality of life. Though I have not yet defended this claim as well as I would like, I also believe that many conventionally good things people can do push toward future generations facing future challenges and opportunities better than they otherwise would, which with a high enough and conventional enough probability makes the future go better. I think that these are claims which elite common sense would be convinced of, if in possession of my evidence. If elite common sense would not be so convinced, I would consider abandoning these assumptions.
Regarding the more purely moral claims, I suspect there are a wide variety of considerations which elite common sense would give weight to, and that very long-term considerations are one type of important consideration which would get weight according to elite common sense. It may also be, in part, a fundamental difference of values, where I am part of a not-too-small contingent of people who have distinctive concerns. However, in genuinely altruistic contexts, I think many people would give these considerations substantially more weight if they thought about the issue carefully.
Near the beginning of my dissertation, I actually speak quite tentatively about the level of confidence I have in my thesis.
I stand by this tentative stance.
I thought some of our disagreement might stem from not understanding what each other meant, and that seems to have been true here. Even if the probability of humanity surviving a long time is large, there’s still entropy in our influence, plus butterfly effects, such that it seems extremely unlikely that what we do now will actually make a pivotal difference in the long term, and we could easily be getting the sign wrong. This makes the probabilities small enough to seem Pascalian for most people.
It’s very common for people to say, “Predictions are hard, especially about the future, so let’s focus on the short term where we can be more confident we’re at least making a small positive impact.”
If by short-term you mean “what happens in the next 100 years or so,” I think there is something to this idea, even for people who care primarily about very long-term considerations. I suspect it is true that the expected value of very long-run outcomes is primarily dominated by totally unforeseeable weird stuff that could happen in the distant future. But I believe that the best way to deal with this challenge is to empower humanity to deal with the relatively foreseeable and unforeseeable challenges and opportunities that it will face over the next few generations. This doesn’t mean “let’s just look only at short-run well-being boosts,” but something more like “let’s broadly improve cooperation, motives, access to certain types of information, narrow and broad technological capabilities, and intelligence and rationality to deal with the problems we can’t foresee, and let’s rely on the best evidence we can to prepare for the problems we can foresee.” I say a few things about this issue here. I hope to say more about it in the future.
An analogy would be that if you were a 5-year-old kid and you primarily cared about how successful you were later in life, you should focus on self-improvement activities (like developing good habits, gaining knowledge, and learning how to interact with other people) and health and safety issues (like getting adequate nutrition, not getting hit by cars, not poisoning yourself, not falling off of tall objects, and not eating lead-based paint). You should not try to anticipate fine-grained challenges in the labor market when you graduate from college or disputes you might have with your spouse. I realize that this analogy may not be compelling, but perhaps it illuminates my perspective.
As you point out, one choice point is how much idealization to introduce. At one extreme, you might introduce no idealization at all, so that whatever you presently approve of is what you’ll assume is right. At the other extreme, you might have a great deal of idealization: you may assume that a better guide is what you would approve of if you knew much more, had experienced much more, were much more intelligent, made no cognitive errors in your reasoning, and had much more time to think. I lean in favor of the latter extreme, as I believe most people who have considered this question do, though I recognize that you want to specify your procedure in a way that leaves some core part of your values unchanged. Still, I think this is a choice that turns on many tricky cognitive steps, any of which could easily be taken in the wrong direction. So I would urge that insofar as you are making a very unusual decision at this step, you should try to very carefully understand the process that others are going through.
ETA: I’d also caution against just straight-out assuming a particular meta-ethical perspective. This is not a case where you are an expert in the sense of someone who elite common sense would defer to, and I don’t think your specific version of anti-realism, or your philosophical perspective which says there is no real question here, are views which can command the assent of a broad coalition of trustworthy people.
My current meta-ethical view says I care about factual but not necessarily moral disagreements with respect to elites. One’s choice of meta-ethics is itself a moral decision, not a factual one, so this disagreement doesn’t much concern me. Of course, there are some places where I could be factually wrong in my meta-ethics, like with the logical reasoning in this comment, but I think most elites don’t think there’s something wrong with my logic, just something (ethically) wrong with my moral stance. Let me know if you disagree with this. Even with moral realists, I’ve never heard someone argue that it’s a factual mistake not to care about moral truth (what could that even mean?), just that it would be a moral mistake or an error of reasonableness or something like that.
I’m a bit flabbergasted by the confidence with which you speak about this issue. In my opinion, the history of philosophy is filled with a lot of people often smarter than you and me going around saying that their perspective is the unique one that solves everything and that other people are incoherent and so on. As far as I can tell, you are another one of these people.
Like Luke Muehlhauser, I believe that we don’t even know what we’re asking when we ask ethical questions, and I suspect we don’t really know what we’re asking when we ask meta-ethical questions either. As far as I can tell, you’ve picked one possible candidate thing we could be asking—“what do I care about right now?”—among a broad class of possible questions, and then you are claiming that whatever you want right now is right because that’s what you’re asking.
I think most people would just think you had made an error somewhere and not be able to say where it was, and would add that you were talking about a completely murky issue that people aren’t good at thinking about.
I personally suspect your error lies in not considering the problem from perspectives other than “what does Brian Tomasik care about right now?”.
[Edited to reduce rhetoric.]
I think it’s fair to say that concepts like libertarian free will and dualism in philosophy of mind are either incoherent or extremely implausible, though maybe the elite-common-sense prior would make us less certain of that than most on LessWrong seem to be.
Yes, I think most of the confusion on this subject comes from disputing definitions. Luke says: “Within 20 seconds of arguing about the definition of ‘desire’, someone will say, ‘Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.’”
Here I would say, “Screw ethics and meta-ethics. All I’m saying is I want to do what I feel like doing, even if you and other elites don’t agree with it.”
Sure, but this is not a factual error, just an error in being a reasonable person or something. :)
I should point out that “doing what I feel like doing” doesn’t necessarily mean running roughshod over other people’s values. I think it’s generally better to seek compromise and remain friendly to those with whom you want to cooperate. It’s just that this is an instrumental concession, not because I actually agree with the values that I’m willing to be nice to.
I think that there is a genuine concern that many people have when they try to ask ethical questions and discuss them with others, and that this process can lead to doing better in terms of that concern. I am speaking vaguely because, as I said earlier, I don’t think that I or others really understand what is going on. This has been an important process for many of the people I know who are trying to make a large positive impact on the world. I believe it was part of the process for you as well. When you say “I want to do what I want to do” I think it mostly just serves as a conversation-stopper, rather than something that contributes to a valuable process of reflection and exchange of ideas.
I think it is a missed opportunity to engage in a process of reflection and exchange of ideas that I don’t fully understand but seems to deliver valuable results.
I’m not always as unreasonable as suggested there, but I was mainly trying to point out that if I refuse to go along with certain ideas, it’s not dependent on a controversial theory of meta-ethics. It’s just that I intuitively don’t like the ideas and so reject them out of hand. Most people do this with ideas they find too unintuitive to countenance.
On some questions, my emotions are too strong, and it feels like it would be bad to budge my current stance.
Fair enough. :) I’ll buy that way of putting it.
Anyway, if I were really as unreasonable as it sounds, I wouldn’t be talking here and putting at risk the preservation of my current goals.
Whether you want to call it a theory of meta-ethics or not, and whether it is a factual error or not, you have an unusual approach to dealing with moral questions that places an unusual amount of emphasis on Brian Tomasik’s present concerns. Maybe this is because there is something very different about you that justifies it, or maybe it is some idiosyncratic blind spot or bias of yours. I think you should put weight on both possibilities, and that this pushes in favor of more moderation in the face of values disagreements. Hope that helps articulate where I’m coming from in your language. This is hard to write and think about.
Why do you think it’s unusual? I would strongly suspect that the majority of people have never examined their moral beliefs carefully and so their moral responses are “intuitive”—they go by gut feeling, basically. I think that’s the normal mode in which most of humanity operates most of the time.
I think other people are significantly more responsive to values disagreements than Brian is, and that this suggests they are significantly more open to the possibility that their idiosyncratic personal values judgments are mistaken. You can get a sense of how unusual Brian’s perspectives are by examining his website, where his discussions of negative utilitarianism and insect suffering stand out.
That’s a pretty meaningless statement without specifying which values. How responsive do you think “other people” would be to a values disagreement about, say, child pornography?
I suspect Nick would say that if there were respected elites who favored increasing the amount of child pornography, he would give some weight to the possibility that such a position was in fact something he would come to agree with upon further reflection.
Or, most likely of all, it’s because I don’t care to justify it. If you want to call “not wanting to justify a stance” a bias or blind spot, I’m ok with that.
:)