When someone has an incurable and lethal respiratory illness, I think we do require them to stay in quarantine and this is broadly accepted. The reason this doesn’t apply to HIV and other such diseases is that they are barely contagious.
RobertWiblin
“After they were launched, I got a marketing email from 80,000 Hours saying something like, “Now, a more effective way to give.” (I’ve lost the exact email, so I might be misremembering the wording.) This is not a response to demand, it is an attempt to create demand by using 80,000 Hours’s authority, telling people that the funds are better than what they’re doing already. ”
I write the 80,000 Hours newsletter and it hasn’t yet mentioned EA Funds. It would be good if you could correct that.
Exactly—if anything I am trying to make the job seem less appealing than it will be, so we attract only the right kind of person.
Needless to say, I think point 1 is settled. As for point 2: Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. that it's OK to harm the weak), he increases the chance that any influence they have will be directed towards bad outcomes.
Thanks for your interest in our work.
As we say in the post, on this and most problem areas 80,000 Hours defers charity recommendations to experts on that particular cause (see: What resources did we draw on?). In this case our suggestion is based entirely on the suggestion of Chloe Cockburn, the Program Officer for Criminal Justice Reform at the Open Philanthropy Project, who works full time on that particular problem area and knows much more than any of us about what is likely to work.
To questions like “does 80,000 Hours have view X that would make sense of this” or “is 80,000 Hours intending to do X”—the answer is that we don’t really have an independent view on any of these things. We’re just syndicating content from someone we perceive to be an authority (just as we do when we include GiveWell’s recommended charities without having independently investigated them). I thought the article was very clear about this, but perhaps we needed to make it even more so in case people skipped down to a particular section without reading the preamble.
If you want to get these charities removed then you’d need to speak with Chloe. If she changes her suggestions—or another similar authority on this topic appears and offers a contrary view—then that would change what we include.
Regarding why we didn’t recommend the Center for Criminal Justice Reform: again, that is entirely because it wasn’t on the Open Philanthropy Project’s list of suggestions for individual donors. Presumably that is because they felt their own grant—which you approve of—had filled their current funding needs.
All the best,
Rob
I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.
I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I’m right.
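To make the sub-linear weighting idea concrete, here is a toy sketch. The functional form (weight proportional to n^alpha with alpha < 1), the choice alpha = 0.5, and the rough neuron counts are all assumptions for illustration only, not a claim about the correct moral weights:

```python
# Rough published neuron-count estimates (orders of magnitude only).
neurons = {"human": 86e9, "mouse": 71e6}

def weight(n, alpha=0.5):
    """Hypothetical sub-linear moral weight: doubling neuron count
    less than doubles the weight when alpha < 1."""
    return n ** alpha

linear_ratio = neurons["human"] / neurons["mouse"]          # ~1200x
sublinear_ratio = weight(neurons["human"]) / weight(neurons["mouse"])  # ~35x
print(linear_ratio, sublinear_ratio)
```

Under linear weighting a human would count roughly 1,200 times as much as a mouse; under the square-root weighting only about 35 times. The point is just that the choice of exponent matters enormously, which is why it would be valuable for someone to investigate it.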
Well, I wasn't proposing a strict quarantine or limits on travel. I was merely proposing preventing people from coming into close contact with colleagues at work, where the risk of contagion is highest, and requiring that they be given the option to reschedule their (expensive) travel. People are already familiar and comfortable with regulations in workplaces and aviation.
If I were proposing a thoroughgoing quarantine, I expect people wouldn’t be nearly as enthusiastic.
“Isn’t it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren’t worth living”
No, this makes perfect sense: 1. They decide animals are objects of moral concern. 2. They look into the conditions those animals live in and decide that in some cases they are worse than not being alive. 3. They decide it's wrong to fund the expansion of a system that holds animals in conditions worse than not being alive at all.
From a selfish point of view, I don't think most rationalists would benefit significantly from a bit of extra money, so it doesn't make much sense for them to dedicate their genuinely scarce resources (time and attention) to identifying high-risk, high-return investments like bitcoin, and in this case to figuring out how to buy and store them safely. And I'm someone who bought bitcoin for the sake of entertainment.
From an altruistic point of view, yes, I expect hundreds of millions of dollars to be donated, and the current flow is consistent with that: I know of 5 million in the last few months, and there's probably more that hasn't been declared.
“then it’s no longer so plausible that “hundreds of millions is a substantial fraction as good as billions”.”
At the full community level the marginal returns on further donations also declines, though more slowly: https://80000hours.org/2017/11/talent-gaps-survey-2017/#how-diminishing-are-returns-in-the-community
This fundraiser has been promoted on the Effective Altruism Forum already, so you may find your questions answered on the thread:
http://effective-altruism.com/ea/hz/please_support_giving_what_we_can_this_spring/
http://effective-altruism.com/ea/j9/giving_what_we_can_needs_your_help/
I’ll re-post this comment as well:
“If I was going to add another I think it would be
Have fun
Talking to people who really disagree with you can represent a very enjoyable intellectual exploration if you approach it the right way. Detach yourself from your own opinions, circumstances and feelings and instead view the conversation as a neutral observer who was just encountering the debate for the first time. Appreciate the time the other person is putting into expressing their points. Reflect on how wrong most people have been throughout history and how hard it is to be confident about anything. Don’t focus just yet on the consequences or social desirability of the different views being expressed—just evaluate how true they seem to be on their merits. Sometimes this perspective is described as ‘being philosophical’.”
I think it’ll be faster to get a sense of that from a personal conversation.
That's because there seem to be many ways for us to go extinct. If we were necessarily held back from space for thousands of years, it is very unlikely we would last that long just here on Earth.
I lurked until I read something I really disagreed with.
“Public declarations would only be signaling, having little to do with maximizing good outcomes.”
On the contrary, trying to influence other people in the AI community to share Eliezer’s (apparent) concern for the suffering of animals is very important, for the reason given by David.
“I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.”
a) Less Wrong doesn't contain the best content on this topic. b) Most of the posts disputing whether animal suffering matters are written by unempathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them. c) The reason has been given by Pablo Stafforini: when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering). d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK.
Why is it worse to die (and people cryonically frozen don’t avoid the pain of death anyway) than to never have been born? Assuming the process of dying isn’t painful, they seem the same to me.
“If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people”
Amusingly, our suggestion of these two charities is entirely syndicated from a blog post put up by Holden Karnofsky himself: http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016
Possibly doing nothing is a good strategy for hunter gatherers facing starvation, but that seems worth checking against the anthropology research. If starvation were a frequent risk, lethargy would surely have been prompted by insufficient food intake, which is rare for humans today; we wouldn't just be lazy for that reason all the time. During times of abundance, one ought to gather and store as much food as possible.
Apparently hunter gatherer bands were egalitarian, so it's unlikely people would have been beaten up by (non-existent) leaders just for hunting and gathering well, especially given that food was shared. Again, the conditions under which people would be picked on in bands are something we can find out by looking at existing anthropology research. Nonetheless, it's hard to imagine that hunter gatherer bands which pushed out members merely for contributing to the group's food supply would be the most successful. We don't favour do-nothings over well-meaning incompetents today, as far as I can tell.
Collectively the community has made hundreds of millions from crypto. But it did so by getting a few wealthy people to buy many bitcoin, rather than many people to buy a few bitcoin. This is a more efficient model because it avoids large fixed costs for each individual.
It also avoids everyone in the community having to dedicate some of their attention to thinking about what outstanding investment opportunities might be available today.
Due to declining marginal returns, hundreds of millions is a substantial fraction as good as billions. So I think we did alright.
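As a toy illustration of how diminishing returns can make the first hundreds of millions worth a large fraction of billions, suppose (purely for the sake of the sketch, not as anyone's endorsed model) that the value of total donations is logarithmic:

```python
import math

# Hypothetical log-utility model of returns to total donations: u(x) = log(x).
hundreds_of_millions = 3e8
billions = 3e9

# Fraction of the value of $3B captured by the first $300M under this model.
ratio = math.log(hundreds_of_millions) / math.log(billions)
print(round(ratio, 2))  # ~0.89 under log utility
```

Under that assumption, ten times less money still captures roughly 90% of the value. The real returns curve is an empirical question (the 80,000 Hours survey linked above is one attempt to estimate it), but any strongly concave curve gives a qualitatively similar result.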