A friend asked me a question I’d like to refer to LW posters.
TL;DR: he wishes to raise the quality of life on Earth; what should he study to have a good idea of how to choose the best charities to donate to?
My friend has a background in programming, physics, engineering, and information security and cryptography. He’s smart, already financially successful, and has friends who are also likely to become successful and influential. He’s also good at direct interactions with people (reading and understanding them, and being likable), as far as I can tell, which doesn’t mean much, because my own skills in this area are sadly lacking. A solution involving courses or full degree programs at major Israeli universities (in particular, TAU) would suit him well, but is by no means the only option.
He wants to spend time, perhaps as much as a small part-time three-year bachelor’s degree (or an at-home equivalent), learning about and understanding larger groups of people. What makes them happy? How does one influence their values? How does one go from helping a person (“he’s hungry, I’ll need some fish and chips to feed him”) to helping a million people (“they’re hungry, I’ll need some farms to grow the food and trucks to move it and refrigerators to store it and power stations to power the refrigerators and coal for the power stations and political stability and...”)?
And, at bottom: how does he learn enough general knowledge to identify what people suffer from most; then enough specific knowledge to identify where good solutions exist; and then some very specific knowledge to identify the charities and investments that will make the best use of donated money?
There is also a second, complementary question: how can he do all this, and integrate the learning and knowledge into his life, effectively, without risking boredom, akrasia, and other motivational issues? I feel it would help for this education to have a good outline from the beginning; for him to feel that things are useful and progressing somewhere; and for results to come in gradually, rather than all at once in three years’ time.
One immediate answer is to suggest causes that concern the LW/H+ community, such as FAI research, biological immortality, etc. My friend may well reach these conclusions himself, and I can recommend that he read the relevant articles and books, but he wants to come to his own conclusions about goals and needs. (Edited:) (A problem with, e.g., FAI research is the extreme difficulty of estimating the return on investment for funding it, or the relative probability of uFAI vs. other extinction scenarios.) I think he would benefit from something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them). He will also want to come to his own conclusions about whom to help first, likely quite far from any neutral approach that weighs all humans on the planet equally.
I don’t believe he’d be satisfied with any conclusion resting purely on abstract reasoning (“un-Friendly AI is an imminent existential risk, therefore FAI research is an overriding priority”).
“he wishes to raise the quality of life on Earth; what should he study to have a good idea of how to choose the best charities to donate to?”
He could start with “shut up and multiply”. (Or, perhaps, he could just change ‘best’ to ‘most appealing’.)
Rereading what I wrote, I don’t quite agree with it myself… I retract that part (will edit).
What I wanted to say (and did not in fact say) was this: take FAI research as an example. It’s hard to measure or predict the value of giving money to such a cause. It produces nothing of external value for most of its existence (until, if it succeeds, it suddenly produces a great deal of value very rapidly). Its progress is hard to measure for anyone who isn’t at least an AI expert. The research team’s probability of success is very hard to predict (as with any complex research). And finally, it’s hard to weigh the probability of uFAI scenarios against the probability of other extinction risks.
If some of these problems could be solved, I think it would be much easier to convince people to fund FAI research.