Currently doing the SERI MATS 4.0 Training Program (multipolar stream). Former summer research fellow at Center on Long-Term Risk, former intern at Center for Reducing Suffering, Wild Animal Initiative, Animal Charity Evaluators. Former co-head of Haverford Effective Altruism.
Research interests: • AI alignment • animal advocacy from a longtermist perspective • acausal interactions • artificial sentience • s-risks • updatelessness
Feel free to contact me for whatever reason! You can set up a meeting with me here.
JamesFaville
Muting the self-critical brain loop (and thanks for that terminology!) is something I’m very interested in. Have you investigated vegan alternatives to fish oil at all?
(Disclaimer: There’s a good chance you’ve already thought about this.)
In general, if you want to understand a system (construal of meaning) forming a model of the output of that system (truth-conditions and felicity judgements) is very helpful. So if you’re interested in understanding how counterfactual statements are interpreted, I think the formal semantics literature is the right place to start (try digging through the references here, for example).
I don’t have much of a thoughtful opinion on the question at hand yet (though I have some questions below), but I wanted to express a deep appreciation for your use of detail elements: it really helps readability!
One concern I would want to see addressed is an estimation of the negative effects of a "brain drain" on regional economies: if a focused high-skilled immigration policy has the potential to exacerbate global poverty, the argument that it has a positive impact on the far future needs to be very compelling. So would these economic costs be significant, or negligible? And would a more broadly permissive immigration policy have similar advantages? Also, given the scope of the issues at hand I would be very surprised if the advantages you ascribe to high-skilled immigration are all of roughly equal expected value: is there one which you think dominates the others? (Like reduced x-risk from AI?)
Upvoted mostly for surprising examples about obstetrics and CF treatment and for a cool choice of topic. I think your question, "when is one like the doctors saving CF patients and when is one like the doctors doing super-radical mastectomies?" is an important one to ask, and distinct from questions about modest epistemology.
Say there is a set A of available actions, of which a subset K has been studied intensively enough that the utility of each action in K is known with a high degree of certainty, while the utility of the other available actions in A is uncertain. Then your ability to surpass the performance of an agent who chooses actions only from K essentially comes down to a combination of whether choosing uncertain-utility actions from A precludes also picking high-utility actions from K, and what the expected payoff is from choosing uncertain-utility actions in A according to your best information.
I think you could theoretically model many domains like this, and work things out just by maximizing your expected utility. But it would be nice to have some better heuristics to use in daily life. I think the most important questions to ask yourself are really (i) how likely are you to horribly screw things up by picking an uncertain-utility action, and (ii) do you care enough about the problem you're looking at to take lots of actions, each of which has a low chance of being harmful but only a small chance of being positive.
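To make the trade-off concrete, here's a minimal sketch of that kind of expected-utility comparison (all the utilities, probabilities, and the action budget are made-up numbers for illustration, not anything from the post):

```python
import random

random.seed(0)

# Toy model: a few well-studied actions with known utility, plus uncertain
# actions whose payoff we only know in distribution (made-up numbers).
known_utilities = [0.4, 0.5, 0.6]   # the well-studied subset K
budget = 2                          # how many actions we can take

def sample_uncertain():
    # Uncertain actions: usually mildly harmful, small chance of a big win.
    return 2.0 if random.random() < 0.1 else -0.05

def conservative_agent():
    # Spends the whole budget on the best-known actions.
    return sum(sorted(known_utilities, reverse=True)[:budget])

def exploratory_agent():
    # Spends one slot on the best known action, one on an uncertain action;
    # here trying an uncertain action precludes taking a second known one.
    return max(known_utilities) + sample_uncertain()

trials = 100_000
conservative = sum(conservative_agent() for _ in range(trials)) / trials
exploratory = sum(exploratory_agent() for _ in range(trials)) / trials
print(f"conservative: {conservative:.3f}, exploratory: {exploratory:.3f}")
```

With these particular numbers the conservative agent wins, because taking an uncertain action crowds out a known-good one and the uncertain payoff distribution isn't generous enough; tweak the payoff of `sample_uncertain` or the budget and the ordering flips.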
Have a look at 80K's (very brief) career profile for party politics. My rough sense is that effective altruists generally agree that pursuing elected office can be a very high-impact career path for individuals particularly well-suited to it, but think that succeeding is very difficult even for an exceptional candidate.
The case against "geospermia" here is vastly overstated: there's been a lot of research over the past decade or two establishing very plausible pathways for terrestrial abiogenesis. If you're interested, read through some work coming out of Jack Szostak's lab (there's a recent review article here). I'm not as familiar with the literature on prebiotic chemistry as I am with the literature on protocell formation, but I know we've found amino acids on meteorites, and it wouldn't be surprising if they and perhaps some other molecules which are important to life were introduced to Earth through meteorites rather than synthesized terrestrially.
But in terms of cell formation, the null hypothesis should probably be that it occurred on Earth. Panspermia isn't ridiculous per se, but conditions on Earth appear to have been much more suitable for cell formation than those of the surrounding neighborhood, and sufficiently suitable that terrestrial abiogenesis is entirely plausible. When it comes to ways in which there could be wild-animal suffering on a galactic scale, I think the possibility of humans spreading life through space colonization is far more concerning.
Also, Zubrin writes:
Furthermore, it needs to be understood that the conceit that life originated on Earth is quite extraordinary. There are over 400 billion of stars in our galaxy, with multiple planets orbiting many of them. There are 51 billion hectares on Earth. The probability that life first originated on Earth, rather than another world, is thus comparable to the probability that the first human on our planet was born on any particular 0.1 hectare lot chosen at random, for example my backyard. It really requires evidence, not merely an excuse for lack of evidence, to be supported.
This is poor reasoning. A better metaphor would be that we're looking at a universe with no water except for a small pond somewhere, and wondering where the fish that currently live in that pond evolved. If water is so rare, why shouldn't we be confused that the pond exists in the first place? The anthropic principle applies here (but be careful with it). Disclaimer: I'm picking this out because I thought it was the most interesting part of the piece, not because I went looking for bad metaphors.
As a meta-note, I was a little suspicious of this piece based on some bad signaling (the bio indicates potential bias, tables are made through screenshots, the article looks like it wants to be in a journal but is hosted on a private blog). I don’t like judging things based on potentially spurious signals, but this might have nevertheless biased me a bit and I’m updating slightly in the direction of those signals being valuable.
I found the last six paragraphs of this piece extremely inspiring, to the extent that I think it non-negligibly raised the likelihood that I'll be taking "exceptional action" myself. I didn't personally connect much with the first part, though it was interesting. Did you use to want to want your reaction to idiocy to be "how can I help", even when it wasn't?
I like this post’s brevity, its usefulness, and the nice call-to-action at the end.
I'm downvoting this because it appears to be a low-effort post which doesn't contribute or synthesize any interesting ideas. Prime Intellect is the novel that first comes to mind as discussing some of what you're talking about, but several chapters are very disturbing, and there are probably better examples out there. If you have Netflix, the Black Mirror episode "San Junipero" (Season 3, Episode 4) is fantastic and very relevant.
I'm downvoting this post because I don't understand it even after your reply above, and the amount of negative karma currently on the post indicates to me that it's probably not my fault. It's possible to write a poetic and meaningful post about a topic, and it's pleasant when someone has done so well, but I think you're better off first trying to state explicitly whatever you're trying to state, to make sure the ideas are fundamentally plausible. I'm skeptical that meditations of this character on a topic are actually helpful to truth-seeking, but I might be typical-minding you.
Essentially, I read this as an attempt at continental philosophy rather than analytic philosophy, and I don’t find continental-style work very interesting or useful. I believe you that the post is meaningful and thoughtful, but the costs of time or effort to understand the meanings or thoughts you’re driving at are too high for me at least. I think trying to lay things out in a more organized and explicit manner would be helpful for your readers and possibly for you in developing these thoughts.
I don’t want to get too precise about answering the above unless you’re still interested in me doing so and don’t mind me stating things in a way that might come across as pretty rude. Also, limiting myself to one more reply here since I should really stop procrastinating work, and just in case.
What I’m taking away from this is that if (i) it is possible for child universes to be created from parent universes, and if (ii) the “fertility” of a child universe is positively correlated with that of its parent universe, then we should expect to live in a universe which will create lots of fertile child universes, whether this is accomplished through a natural process or as you suggest through inhabitants of the universe creating fertile child universes artificially.
I think that’s a cool concept, and I wrote a quick Python script for a toy model to play around with. Your consequences seem kind of implausible to me though (I might try to write more on that later).
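A sketch of the kind of toy model I mean (this isn't my exact script, and every parameter here is an arbitrary illustrative choice): each universe has a "fertility" that sets its expected number of child universes, children inherit fertility with noise, and selection pushes the population toward high-fertility lineages.

```python
import random

random.seed(1)

def n_children(fertility, max_children=3):
    # Binomial(3, fertility/3): the expected number of children equals
    # the fertility parameter (capped at max_children).
    p = min(max(fertility / max_children, 0.0), 1.0)
    return sum(random.random() < p for _ in range(max_children))

def step(population, cap=5000, noise=0.1):
    # One "generation": every universe spawns children that inherit its
    # fertility plus Gaussian noise; truncate to keep the simulation bounded.
    children = []
    for fertility in population:
        for _ in range(n_children(fertility)):
            children.append(max(0.0, fertility + random.gauss(0, noise)))
    random.shuffle(children)
    return children[:cap]

# Start with universes spread around fertility 1 (replacement level).
population = [random.uniform(0.5, 1.5) for _ in range(1000)]
for generation in range(30):
    population = step(population)

mean_fertility = sum(population) / len(population)
print(f"mean fertility after selection: {mean_fertility:.2f}")
```

Even starting at replacement level, mean fertility climbs over the generations, since fertile lineages contribute disproportionately many descendants; that's the selection effect from points (i) and (ii) in miniature.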
Possible low-hanging fruit: name tags.
Why do you think we should be more worried about reading fiction? Associated addictiveness, time consumption, escapism?
Is skilled hunting unethical?
Thanks for the feedback Raemon!
Concrete Concerns
I’d like to see [“when predators are removed from a system, a default thing that seems to happen is that death-by-predator is replaced by death-by-starvation” and “how do you do population control without hunting?”] at least touched on in wild-animal-suffering pieces
I’d like to see those talked about too! The reason I didn’t is I really don’t have any insights on how to do population control without hunting, or on which specific interventions for reducing wild animal suffering are promising. I could certainly add something indicating I think those sorts of questions are important, but that I don’t really have any answers beyond “create welfare biology” and “spread anti-speciesism memes so that when we have better capabilities we will actually carry out large interventions”.
have a table of contents of the issues at hand
I had a bit of one in the premise (“wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors”), but it sounds like you might be looking for something different/more specific? You’re not talking about a table of contents consisting of more or less the section headings right?
Aiming to Persuade vs Inform
My methodology was "outline different reasons why skilled hunting could remain an unethical action", so if the article seemed as though I thought each reason was likely to be true, I did a poor job of writing! I did put probabilities on everything to calculate the 90% figure at the top, but since I don't consider myself especially well-calibrated I thought it might be better to leave them off. The only reason that I think is actually more likely to be valid than wrong is #3, but I do assign enough probability mass to the others that I think they're of some concern.
I thought the arguments in favor of skilled hunting (making hunters happy and preventing animals from experiencing lives which might involve lots of suffering) were pretty apparent and compelling, but I might be typical-minding that. I also might be missing something more subtle?
In terms of whether that methodology was front-page appropriate, I do think that if the issue I was writing about were something slightly more political this would be very bad. But as I saw it, the main content of the piece isn't the proposition that skilled hunting is unethical, it's the different issues that come up in the process of discussing it ("wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors"). My goal is not to persuade people that I'm right and you must not hunt even if you're really good at it, but to talk about interesting hammers in front of an interesting nail.
[Edit: Moved to personal blog.]
I don't think the vaccination example shows that the heuristic is flawed: in the case of vaccinations, we do have strong evidence that vaccinations are net-positive (since we know their impact on disease prevalence, and know how much suffering can be associated with vaccine-preventable diseases). So if we start with a prior that vaccinations are evil, we quickly update to the belief that vaccinations are good based on the strength of the evidence. This is why I phrased the section in terms of prior-setting instead of evidence, even though I'm a little unsure how a prior-setting heuristic would fit into a Bayesian epistemology. If there's decently strong evidence that skilled hunting is net-positive, I think that should outweigh any prior developed through the children's movie heuristic. But in the absence of such evidence, I think we should default to the naive position of it being unethical. Same with vaccines.
I’d be interested to know if you can think of a clearer counterexample though: right now, I’m basing my opinion of the heuristic on a notion that the duck test is valuable when it comes to extrapolating moral judgements from a mess of intuitions. What I have in mind as a counterexample is a behavior that upon reflection seems immoral but without compelling explicit arguments on either side, for which it is much easier to construct a compelling children’s movie whose central conceit is that the behavior is correct than it is to construct a movie with the conceit that the behavior is wrong (or vice-versa).
I think normal priors on moral beliefs come from a combination of:
Moral intuitions
Reasons for belief that upon reflection, we would accept as valid (e.g. desire for parsimony with other high-level moral intuitions, empirical discoveries like “vaccines reduce disease prevalence”)
Reasons for belief that upon reflection, we would not accept as valid (e.g. selfish desires, societal norms that upon reflection we would consider arbitrary, shying away from the dark world)
I think the “Disney test” is useful in that it seems like it depends much more on moral intuitions than on reasons for belief. In carrying out this test, the algorithm you would follow is (i) pick a prior based on the movie heuristic, (ii) recall all consciously held reasons for belief that seem valid, (iii) update your belief in the direction of those reasons from the heuristic-derived prior. So in cases where our belief could be biased by (possibly unconscious) reasons for belief that upon reflection we would not accept as valid, where the movie heuristic isn’t picking up many of these reasons, I’d expect this algorithm to be useful.
In the case of vaccinations, the algorithm makes the correct prediction: the prior-setting heuristic would give you a strong prior that vaccinations are immoral, but I think the valid reasons for belief are strong enough that the prior is easily overwhelmed.
I can come up with a few cases where the heuristic points me towards other possible moral beliefs I wouldn’t have otherwise considered, whose plausibility I’ve come to think is undervalued upon reflection. Here’s a case where I think the algorithm might fail: wealth redistribution. There’s a natural bias towards not wanting strong redistributive policies if you’re wealthy, and an empirical case in favor of redistribution within a first-world country with some form of social safety net doesn’t seem nearly as clear-cut to me as vaccines. My moral intuition is that hoarding wealth is still bad, but I think the heuristic might point the other way (it’s easy to make a film about royalty with lots of servants, although there are some examples like Robin Hood in the other direction).
Also, your comments have made me think a lot more about what I was hoping to get out of the heuristic in the first place and about possible improvements; thanks for that! :-)
I actually like the idea of building a “rationalist pantheon” to give us handy, agenty names for important but difficult concepts. This requires more clearly specifying what the concept being named is: can you clarify a bit? Love Wizard of Earthsea, but don’t get what you’re pointing at here.
At what age do you all think people have the greatest moral status? I’m tempted to say that young children (maybe aged 2-10 or so) are more important than adolescents, adults, or infants, but don’t have any particularly strong arguments for why that might be the case.