I was reading through these publications one by one, thinking that there must be a quick way to download all pdf links from a page at once, and it turns out there is
Ryan Carey
Hi, another Australian visiting Berkeley. Presumably this will happen on Wednesday the 21st.
Collating the recommendations:
$2/hr: buy an automatic dishwasher, assuming the dishwasher breaks down as soon as the warranty expires, and that it saves 30 mins/day
$4/hr: buy a smartphone, assuming it costs $1/day and gives you 15 mins of useful time per day
$7/hr: getting laundry done professionally
$10/hr: eating dinner out
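The dollars-per-hour figures above are just a ratio of daily cost to daily time saved. A minimal sketch of the calculation (the smartphone figures are given in the thread; the $1/day dishwasher cost is back-calculated from the stated $2/hr and 30 mins/day, not stated directly):

```python
def cost_per_hour_saved(cost_per_day, minutes_saved_per_day):
    """Dollars spent per hour of time freed up by a purchase."""
    return cost_per_day / (minutes_saved_per_day / 60)

# Smartphone: $1/day for 15 useful minutes per day
print(cost_per_hour_saved(1.00, 15))  # → 4.0, i.e. $4/hr

# Dishwasher: saves 30 mins/day; $1/day cost implies the $2/hr figure
print(cost_per_hour_saved(1.00, 30))  # → 2.0, i.e. $2/hr
```

Anything below your own hourly value of time is, on this reasoning, worth buying.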
Hi all, Visiting from Australia, I’ll be there! Got a start time?
I’m going to have to miss this one. Enjoy.
More suggestions here: http://80000hours.org/blog/128-save-time-through-smart-buying
Ben, is this Friday meetup at the standard location, or is Richard hosting it?
I’m in.
Don’t you need to pay tax before you pay your living expenses? E.g. B70 = f(B68), and then B69 = f(B70).
The absolute poverty line is US$2/day, purchasing power parity adjusted. You don’t earn less than $8/day.
Relative poverty is not having enough money to maintain the standard of living that is customary in that society.
The absolute poverty line is found by totalling the cost of all the essential resources that an average human adult consumes in one year. It is determined by the World Bank and adjusted for purchasing power parity, so it applies internationally. The absolute American poverty line is just the international absolute poverty line. And there’s no need for a relative poverty line; it’s rather a nonsense concept.
This is a good policy. I’ll see you all there.
As Bostrom seems to realise, he has made a strong argument for positive trajectory changes, not only reduction of x-risk, although this is the most obvious kind.
Here’s my interpretation of this post as a Venn diagram. Discrete permanent changes are actions that change the relative likelihood of permanent future scenarios like FAI, permanent global totalitarianism, or annihilation. http://i.imgur.com/8TyxHXK.jpg
Accelerating economic development may be a (continuous) positive trajectory change, but it will very likely also bring forward existential risk from technology, a special kind of discrete trajectory change.
Not that I doubt you, but what makes you think it’s cost-effective?
Great, Luke!
I think we should include Global Happiness Organisation as an effective altruist charity; one that rides across these four categories.
This seems reasonable. I guess they’re either not effective, or not providing evidence that they’re effective.
Their stated goals are altruistic and consequentialist, with concern for both animals and the distant future. They’re operated by utilitarians like Ludwig Lindstrom, James Evans, and Jasper Ostman, supported by Peter Singer; they want cultured meat, and seem to want to apply scientific research and measurement to improving welfare (this is the most promising of their policy proposals). I guess, as you say, they’re altruistic only.
These activities plausibly belong in an EA portfolio, so I hope they can lift their game!
(If anyone from GHO can provide further information, this seems to be a suitable time and place.)
See you all there!
A great website, but I’d like to quickly point out that one of the core claims on this website doesn’t make sense at all - “It is a hard task to make the world a better place and many of the best possible things to do are unmeasured and unquantified. This means we can take guesses at how much impact we are having, but it is quite difficult to know for sure. We’re walking forward with blindfolds. A huge benefit of fundraising is that it is one of the easiest fields to quantify with a quick feedback loop and a clear metric of success—money moved. We can take off the blindfolds and see where we’re going.”
You can see how much money you’re raising, which is important, but you can’t see what the impact of the funds raised is, so you still don’t know where you’re going. Probably the target of your donation—even within a category like global health and development, or animal welfare—is much more important than the amount of money donated. The effect of a fivefold increase due to efficient fundraising could be dwarfed by this effect. This is even more the case when you compare between categories, e.g. development vs x-risk.
Saying that fundraising takes off the blindfold because you can evaluate how much money you’re making is like saying that a speedometer takes off the blindfold when you’re driving, because you can tell how much you’re accelerating.
I still love this charity and the idea of effective fundraising, but this claim should be fixed.
Fairly good summary. I don’t mind the FAQ structure. The writing style is good, and the subject matter suggests obvious potential to contribute to the upcoming Wiki Felicifia in some way. Now as good as the essay is, I have some specific feedback:
In section 2.2, I wonder if you could put your point more strongly...
you wrote: if morality is just some kind of metaphysical rule, the magic powers of the Heartstone should be sufficient to cancel that rule and make morality irrelevant. But the Heartstone, for all its legendary powers, is utterly worthless and in fact totally indistinguishable, by any possible or conceivable experiment, from a fake...
I would suggest: “Metaphysical rules are like a kind of heartstone that one can wear when making moral decisions. It is reputed to rule our moral considerations. But despite its reputation, the heartstone is utterly worthless and...”
If you’re going to use a metaphor, you might as well get full value from it!
2.61: I understand the point you’re making here. I couldn’t agree with it more. Still, if you’re trying to reduce the number of words standing between the reader and the later sections—as you should be—then this section is one you could consider abbreviating or removing. The whole phlogiston analogy is not obvious to a layperson.
Your line of thought seems to get somewhat derailed at 3.5. I don’t quite understand why ‘signalling’ fits under ‘assigning value to other people’.
4 is extremely good. The trolley discussions are reminiscent of Peter Unger’s Living High and Letting Die. It’s a shame it takes so long to get there.
Continued in this Felicifia post