Yeah, this is the one that I would have used.
What evidence would convince you otherwise? Would superhuman performance in games that require difficult coordination be compelling?
DeepMind has outlined Hanabi as one of the next games to tackle: https://arxiv.org/abs/1902.00506
True, though in this case someone had made a reply, so it wouldn’t make much of a difference. Agree though that we should probably get around to building UI to delete leaf-comments.
Yeah, I have intuitively the same interpretation.
My model is also that there are indeed lots of competing notational conventions in probability theory, and that some people would tell you that the current notation being used is invalid, or stands for something weird and meaningless. So I do think explaining the notation and the choice of notation in detail here is a good idea.
Yeah, I also noticed this a while ago and was quite sad.
Full link, since this one just goes to the frontpage of the blog: http://conversableeconomist.blogspot.com/2019/04/one-case-for-keeping-statistical.html
That is indeed acknowledged in the post:
The different scientific methods from different thinkers were largely playing with the same elements. Still, they are united by all involving some degree of empiricism and some degree of reason, and by being for the purpose of producing naturalistic explanations of physical reality.
Mod note: Fixed the old semi-broken HTML formatting for this post. The old formatting is still available as a revision.
Mod note: made into linkpost
Oh, sorry. I didn’t mean to imply that there isn’t a real difference here. I was just commenting on the specific statement “the appropriate description would be” which does seem primarily to be a statement about wording, and not about ethics.
I am not claiming that they exist. I am asking you to consider what you would do in the hypothetical in which you are convinced that they exist.
Yeah, that feels like one of the hypotheses I am attracted to, though it feels wrong for a variety of other reasons.
Continuing with the thought experiments, since I find your answers (as well as your confidence in them) surprising. I have a sense that you believe that the responses to these questions are obvious, and if that is true, I would be interested in whether you can generate an explanation that makes them obvious to me as well.
Let’s imagine the inverse scenario in which you travel in a spaceship away from earth to a new planet far away and you never expect to come back. The new planet has a million elephants on it; old earth has only 1000 elephants left on it. Imagine the alternative scenario in which you never leave earth and stay with the 1000 elephants, and also never expect to leave for any other planet. Would you pay the same amount to save the 1000 elephants on earth in either case?
To make this concrete, the two compared scenarios are:
1. You are on earth, there are a million elephants on a far-away planet you never expect to see, and you are offered a trade to save the last 1000 elephants on earth
2. You are on a distant planet with a million elephants on it, far-away earth’s last 1000 elephants are about to die, and you are offered a trade to save them
If so, how is this different from the scenario in which there are a million elephants in a bunker you will never visit? Also, does this mean that your moral evaluation of the same group of animals changes as you travel in a spaceship from one planet to another?
(Also, in considering these, try to control for as many of the secondary benefits of elephants as possible. E.g., maybe imagine that you are the last human, and try to account for the potential technological, hedonic and cultural benefits of having elephants around)
If all the elephants on Earth die, but elephants still exist in an alternate universe, it is not correct to say that “the elephant species yet survives”. Rather, the appropriate description would be “the elephant species has gone extinct; matters may, however (in this as in other things), be different in some alternate universe”.
I don’t think everyone who agrees with the OP would agree with this statement. At least I do not. Though this feels more like arguing definitions in a way that is less likely to result in much productive discourse.
If you were to put a gun to my head and force me to give an answer in a minute, I think, though it might honestly depend on the day, I would probably pay the same in each scenario. Though I would assign significant probability to having made the “wrong” choice (and I wish I could give you a clear and precise definition of what I mean by “wrong” here, but my metaethics have not reached reflective equilibrium, and so the closest thing I have is just “would prefer to have made a different choice given 10000 more years to think, with greater intelligence, all the world’s knowledge at my fingertips, etc.”)
Internally, this situation feels like I have something closer to an inside-view gears-like model that predicts that I should pay different amounts, combined with a “conservation of energy”/“Dutch-book”-like model that tells me that if I were to pay different amounts, I would have to be irrational in some way, even if I don’t know how precisely yet.
I don’t understand the Pascal’s mugging objection. What is the mugging here? Why are they “my elephants”?
I am not trying to convince you of anything here; I feel honestly confused about this question, and it is one that I have found useful to ask myself in order to clarify my thinking on this.
What would your response be to the other question I posed in the thread?
Yeah, that is also roughly my response, though the other thought experiment I suggested in another comment feels very similar to me, and that reduction doesn’t really work in that case (though similar reductions kind of do). Interested in your response to that question.
As someone who also roughly believes what is outlined in the OP, I don’t think I would pay the same in each of these scenarios. I do have a model that I “should” pay the same in each scenario, in a way that I cannot easily externalize, and definitely not in a way that is obvious to me.
In all but the second scenario, more than 1,000,000 elephants do indeed “exist” (though the point of the exercise is at least in part to poke at what it means for something to exist), and so based on the argument made above, the first scenario would suggest the value of the marginal 1000 elephants (whose loss would move the total number of elephants from 1,001,000 to 1,000,000) to be lower than in the second scenario (where their loss would move the total from 1,000 to 0).
Continuing in the tradition of Socratic questioning, if you would respond with the same amount in all the scenarios above, would you also respond the same if there were 1 million elephants buried deep underground in a self-sustaining bunker on a different planet in our solar system, and you would never expect to interact with them further? Would your answer change if there were an easily available video-feed of the elephants that you could access from the internet?
A thought experiment that others have proposed to me when I brought up intuitions like this:
If it is worse for a species to go from 1000 members to 0 members than to go from 1,001,000 members to 1,000,000 members, then you start behaving very differently based on slightly different interpretations of physics. Compare your actions in the following scenarios:
1. There exists a planet just outside of humanity’s light-cone that has one million elephants on it; nothing you do can ever interact with them. How much would you pay to save the last one-thousand elephants that are still living on earth?
2. There does not exist another elephant anywhere else in the universe. How much would you be willing to pay to save the last one-thousand elephants on earth?
3. The universe is infinite, and even in areas without life, every 10^10^10 light years an elephant randomly assembles due to quantum fluctuations. How much would you be willing to pay for the last 1000 elephants on earth?
4. The universe is finite, but the quantum multiverse contains many worlds in which elephants exist, though you can never experience them. How much do you pay for the last 1000 elephants on earth?
These arguments do not definitively establish that you can’t have this kind of diminishing returns, but they do mean that your morality starts depending on very nuanced facts about cosmology.