I have audiobook recommendations here.
Nick_Beckstead
Thanks!
Could you say a bit about your audiobook selection process?
I’d say Hochschild’s stuff isn’t that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don’t follow it but should, do follow it but shouldn’t, and don’t follow it and shouldn’t. There’s nothing systematic about it.
Hochschild’s own answer to my question is:
When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely, the public should not fall in line when leaders’ assertions are either empirically unsupported, or morally unjustified, or both. (p. 536)
This view seems to be the intellectual cousin of the view that we should just believe what is supported by good epistemic standards, regardless of what others think. (These days, philosophers are calling this a “steadfast” (as contrasted with “conciliatory”) view of disagreement.) I didn’t talk about this kind of view, largely because I find it very unhelpful.
I haven’t looked at Zaller yet but it appears to mostly be about when people do (rather than should) follow elite opinion. It sounds pretty interesting though.
What I mostly remember from that conversation was disagreeing about the likely consequences of “actually trying”. You thought elite people in the EA cluster who actually tried had high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn’t know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did.
Fair enough regarding how you want to spend your time. I think you’re mistaken about how open I am to changing my mind about things in the face of arguments, and I hope that you reconsider. I believe that if you consulted with people you trust who know me much better than you, you’d find they have different opinions about me than you do. There are multiple cases where detailed engagement with criticism has substantially changed my operations.
If one usually reliable algorithm disagrees strongly with the others, then yes, in the short term you should probably effectively ignore it. But that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it, and more importantly, such deviations should be investigated with some urgency.
I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that—as you suggest—exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I’m thinking about how to live up to that agreement more.
Regarding the rest of it, I did say “or give less weight to them”.
I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards to at least some people.
Thanks for answering the main question.
I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like “Person X wouldn’t go for this” and “That cluster of people that seems good really wouldn’t go for this”, and trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as “following the standards that seem credible to me upon reflection”, maybe we don’t disagree too much. If it doesn’t, I’d say it’s a substantial disagreement.
The main thing that I personally think we don’t need as much of is donations to object-level charities (e.g. GiveWell’s top picks). It’s unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...
I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am for opportunities for us to learn and expand our influence. So I’m pretty on board here.
I think “writing blogposts criticizing mistakes that people in the EA community commonly make” is a moderate strawman of what I’d actually like to see, in that it gets us closer to being a successful movement but clearly won’t be sufficient on its own.
That was my first pass at how I’d start trying to increase the “self-awareness” of the movement. I would be interested in hearing more specifics about what you’d like to see happen.
Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can’t come to nontrivial conclusions already, the kind of facts we’re likely to find won’t help very much.
A few reasons. One is that the model for research having an impact is: you do research --> you find valuable information --> people recognize your valuable information --> people act differently. I have become increasingly pessimistic about people’s ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.
Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it’s easier for us to tell if we’re making progress, we’ll learn how to learn about these issues more quickly.
I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By “the basics” I mean stuff like “who is working on synthetic biology?” in contrast with stuff like “what’s the right theory of population ethics?”.
You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.
I’d like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of “pretending to actually try.” People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.
As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you’re bringing up and have chosen to focus on other things. Big picture, I find claims like “your thing has problem X so you need to spend more resources on fixing X” more compelling when you point to things we’ve been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I’d be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I’ve considered doing so and agree this would help address some of the issues you’ve identified. But I would welcome more of that kind of thing.
I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis which heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn’t say I settled all the issues, but I think we’d make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell shallow cause overviews.
Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.
I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.
Edited: When I first read this, I thought you were saying you hadn’t brought these problems up with me, but re-reading it it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it but this is mostly criticism I wouldn’t say I’d heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.
I agree that this would be good, but didn’t think it was worthwhile for me to go through the extra effort in this case. But I did think it was worthwhile to share what I had already found. I think I was very clear about how closely this had been vetted (which is to say, extremely little).
What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually go to zero? Perhaps this could be a way to be “timid” while making the downsides of “timidity” seem not so bad or even reasonable? For example in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.
I think it would be interesting if you could show that the space of possible periods-of-lives is structured in such a way that, when combined with a reasonable rule for discounting repetitions, yields a bounded utility function. I don’t have fully developed views on the repetition issue and can imagine that the view has some weird consequences, but if you could do this I would count it as a significant mark in favor of the perspective.
BTW what do you think about my suggestion to do a sequence of blog posts based on your thesis?
I think this would have some value but isn’t at the top of my list right now.
Also as an unrelated comment, the font in your thesis seems to be such that it’s pretty uncomfortable to read in Adobe Acrobat, unless I zoom in to make the text much larger than I usually have to. Not sure if it’s something you can easily fix. If not, I can try to help if you email me the source of the PDF.
I think I’ll keep with the current format for citation consistency for now. But I have added a larger font version here.
Also, it’s not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post that’s related to this.
I agree that Period Independence may break in the kind of case you describe, though I’m not sure. I don’t think that the kind of case you are describing here is a strong consideration against using Period Independence in cases that don’t involve exact repetition. I think your main example in the post is excellent.
OK, I’ll ask Paul or Stewart next time I see them.
Does your proposal also violate #1 because the simplicity of an observer-situated-in-a-world is a holistic property of the observer-situated-in-a-world rather than a local one?
That aside, I do have an object-level comment. Nick states (in section 6.3.1) that Period Independence is incompatible with a bounded utility function, but I think that’s wrong. Consider a total utilitarian who exponentially discounts each person-stage according to their distance from some chosen space-time event. Then the utility function is both bounded (assuming the undiscounted utility for each person-stage is bounded) and satisfies Period Independence.
I agree with this. I think I was implicitly assuming some additional premises, particularly Temporal Impartiality. I believe that Period Independence + Temporal Impartiality is inconsistent with a bounded utility function. (Even saying this implicitly assumes other stuff, like transitive rankings, etc., though I agree that Temporal Impartiality is much more substantive.)
Another idea for a bounded utility function satisfying Period Independence, which I previously suggested on LW and was originally motivated by multiverse-related considerations, is to discount or bound the utility assigned to each person-stage by their algorithmic probability.
I am having a hard time parsing this. Could you explain where the following argument breaks down?
Let A(n,X) be a world in which there are n periods of quality X.
The value of what happens during a period is a function of what happens during that period, and not a function of what happens in other periods.
If the above premise is true, then there exists a positive period quality X such that, for any n, A(n,X) is a possible world.
Assuming Period Independence and Temporal Impartiality, as n approaches infinity, the value of A(n,X) approaches infinity.
Therefore, Period Independence and Temporal Impartiality imply an unbounded utility function.
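For concreteness (notation mine): Temporal Impartiality rules out position-dependent weights, so each of the n periods contributes the same value X, and

```latex
V(A(n, X)) = \sum_{t=1}^{n} X = nX \longrightarrow \infty \quad \text{as } n \to \infty,
```

so any value function satisfying both premises is unbounded.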
The first premise here is something I articulate in Section 3.2, but may not be totally clear given the informal statement of Period Independence that I run with.
Let me note that one thing about your proposal confuses me, and could potentially be related to why I don’t see which step of the above argument you deny. I primarily think of probability as a property of possible worlds, rather than individuals. Perhaps you are thinking of probability as a property of centered possible worlds? Is your proposal that the goodness of a world A is of the form:
g(A) = (well-being of person 1 × prior centered-world probability of person 1 in world A) + (well-being of person 2 × prior centered-world probability of person 2 in A) + …
? If it is, this is a proposal I have not thought about and would be interested in hearing more about its merits and why it is bounded.
Would be interested to know more about why you think this is “fantastically wrong” and what you think we should do instead. The question the post is trying to answer is, “In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?” I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow set of standards still?
I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong. For what it’s worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock and one of them says the stock is worth 100x market price and all the others say it is worth market price. I would not take straight averages here and assume the stock is worth about 2x market price, even if the algorithm giving a weird answer was generally about as good as the others.
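As a concrete illustration of why the aggregation method matters (a hedged sketch; the estimate values are made up for the example, not real data), compare a straight arithmetic average with a geometric mean on the stock case:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Averaging in log space damps a single extreme estimate.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# 99 algorithms say the stock is worth market price (1.0);
# one outlier says it is worth 100x market price.
estimates = [1.0] * 99 + [100.0]

print(arithmetic_mean(estimates))  # 1.99: the outlier doubles the estimate
print(geometric_mean(estimates))   # ~1.047: the outlier barely moves it
```

The geometric mean effectively ignores the dissenting algorithm in the short term without dropping it from the ensemble, which is the behavior described above.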
The answers over the last 6 weeks have not been very repetitive at all. I’m not sure why this is exactly, since when I was much younger and would pray daily the answers were highly repetitive. It may have something to do with greater maturity and a greater appreciation of the purpose of the activity.
I think of the gratitude list as things that stood out as either among the best parts of the day or as unusually good (for you personally). And mistakes go the opposite way.
That sounds reeeeaaally suspicious in terms of potentially post-facto assignments. (Though defeasibly so—I can totally imagine a case being made for, “Yes, this really was generally visible to the person on the street at the time without benefit of hindsight.”)
This isn’t something I’ve looked into closely, though from looking at it for a few minutes I think it is something I would like to look into more. Anyway, on the Wikipedia page on diffusion of innovation:
“This is the second fastest category of individuals who adopt an innovation. These individuals have the highest degree of opinion leadership among the other adopter categories. Early adopters are typically younger in age, have a higher social status, have more financial lucidity, advanced education, and are more socially forward than late adopters. More discrete in adoption choices than innovators. Realize judicious choice of adoption will help them maintain central communication position (Rogers 1962 5th ed, p. 283).”
I think this supports my claim that elite common sense is quicker to join and support new good social movements, though as I said I haven’t looked at it closely at all.
Can you use elite common sense to generate a near-term testable prediction that would sound bold relative to my probability assignments or LW generally?
I can’t think of anything very good, but I’ll keep it in the back of my mind. Can you think of something that would sound bold relative to my perspective?
How would this apply to social issues, do you think? It seems like this would be a poor way to be at the forefront of social change. If this strategy were widely applied, would we ever have seen the 15th and 19th amendments to the Constitution here in the US?
My impression is that the most trustworthy people are more likely to be at the front of good social movements than the general public, so that if people generally adopted the framework, many of the promising social movements would progress more quickly than they actually did. I am not sufficiently aware of the specific history of the 15th and 19th amendments to say more than that at this point.
There is a general question about how the framework is related to innovation. Aren’t innovators generally going against elite common sense? I think that innovators are often overconfident about the quality of their ideas, and have significantly more confidence in their ideas than they need for their projects to be worthwhile by the standards of elite common sense. E.g., I don’t think you need to have high confidence that Facebook is going to pan out for it to be worthwhile to try to make Facebook. Elite common sense may see most attempts at innovation as unlikely to succeed, but I think it would judge many as worthwhile in cases where we’ll get to find out whether the innovation was any good or not. This might point somewhat in the direction of less innovation.
However, I think that the most trustworthy people tend to innovate more, are more in favor of innovation than the general population, and are less risk-averse than the general population. These factors might point in favor of more innovation. It is unclear to me whether we would have more or less innovation if the framework were widely adopted, but I suspect we would have more.
On a more personal basis, I’m polyamorous, but if I followed your framework, I would have to reject polyamory as a viable relationship model. Yes, the elite don’t have a lot of data on polyamory, and although I have researched the good and the bad, and how it can work compared to monogamy, I don’t think that I would be able to convince the elite of my opinions.
My impression is that elite common sense is not highly discriminating against polyamory as a relationship model. It would probably be skeptical of polyamory for the general person, but say that it might work for some people, and that it could make sense for certain interested people to try it out.
If your opinion is that polyamory should be the norm, I agree that you wouldn’t be able to convince elite common sense of this. My personal take is that it is far from clear that polyamory should be the norm. In any event, this doesn’t seem like a great test case for taking down the framework because the idea that polyamory should be the norm does not seem like a robustly supported claim.
What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it’s unclear to me whether we should talk about an “unpacking fallacy” or a “failure to unpack fallacy”.