This is an account of some misgivings I’ve been having about the whole rationality/effective altruism world-view. I do expect some outsiders to think similarly.
So yesterday I was reading SSC, and there was a reply to some article about the EA community, from someone whose name meant nothing to me, who among other things called EAs ‘white male autistic nerds’.
‘Rewind,’ said my brain.
‘Aww,’ I reasoned. ‘You know. Americans. We have some heuristics like this, too.’
‘...but what is this critique about?’
‘Get unstuck already. The EA community is full of young, hard-working, talented, educated, hopeful people...’
‘Let’s not join,’ brain ruled immediately. ‘We’re not like that!’
‘...who are out to save the world, eliminate suffering and maybe even defeat Death.’
Brain smirked. ‘I find it easier to believe in the WMAN than in the YHTEHS—fewer dice rolls… But even if all of it is true, and they do intend to do all this, how would they fail?’
‘Huh?’
‘Would they lose their jobs, if some angry developer rings up their boss? Would they get sued, and lose their jobs, if they protest unwisely? Would they get beaten up in a dark alley, and incidentally lose their jobs, if - ’
‘THE WHOLE POINT is that you don’t risk your own skin. You efficiently pay others to do it, hopefully without anyone actually having to take the risk, and in this way more people benefit. And stop being bloody-minded.’
‘Well, good luck getting more people to join. We want to have lived. (In case there ain’t no Singularity coming soon.) We believe in experience. We believe in failure.’
‘Failure isn’t efficient. And what are you getting at? That you want us to get beaten up?’
‘No, I want to see some price they pay for their ideas. Out of, you know, sheer malice. Like how, if you’re an environmentalist, everybody around you knows better than you what you ought to be doing.’
‘They pay money, because people shouldn’t have to be heroes to do good. Shouldn’t have to be sad to do good. Or angry. Even if it helps.’
Brain thought for a moment.
‘Okay. But why do they expect others to be sad, angry or heroes? You buy a malaria net as an Effective Altruist, you kinda make a contract with somebody who uses it, like Albus Dumbledore giving the Cloak of Invisibility to Harry Potter. For your money to have mattered, that person would have to live in unceasing toil.’
‘Which is in their best interests anyway.’
‘...in more toil than you could ever imagine. And sorrow. And they’d have to make efficient decisions. Aren’t you morally obliged to keep helping?’
‘If a builder sells a house, is he morally obliged to keep repairing it?’ I shrugged. ‘Legally, perhaps, if the house falls down.’
‘Then I want to know what an Effective Altruist does when the house falls down, in the absence of any law that can force him,’ said the brain. ‘Surely he is more responsible than the builder?’
I don’t follow. Are you arguing that saving a person’s life is irresponsible if you don’t keep saving them?
(I think) I’m arguing that if you have, with some probability, saved some people, and you intend to keep saving people, it is more efficient to keep saving the same set of people.
I assume you meant “more ethical” rather than “more efficient”? In other words, the correct metric shouldn’t just sum over QALYs, but should assign f(T) utils to a person with life of length T of reference quality, for f a convex function. Probably true, and I do wonder how it would affect charity ratings. But my guess is that the top charities of e.g. GiveWell will still be close to the top in this metric.
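To make the convex-metric idea concrete, here is a minimal sketch, assuming a toy convex f (f(T) = T²) and made-up lifespans; none of the numbers come from real charity data:

```python
def f(T: float) -> float:
    """A toy convex 'value of a whole life of length T' function (illustrative assumption)."""
    return T ** 2

def qaly_sum(extensions):
    """Plain QALY-style metric: total years of life added, summed over people."""
    return sum(dt for _t, dt in extensions)

def convex_utility(extensions):
    """The comment's metric: sum of f(T + dT) - f(T) over people."""
    return sum(f(t + dt) - f(t) for t, dt in extensions)

# Option A: extend two different 30-year lives by 5 years each.
spread = [(30, 5), (30, 5)]
# Option B: spend both interventions on the same person: 30 -> 40.
concentrated = [(30, 10)]

print(qaly_sum(spread), qaly_sum(concentrated))              # 10 10   (a tie)
print(convex_utility(spread), convex_utility(concentrated))  # 650 700 (B wins)
```

Under the plain QALY sum the two options tie; the convex metric prefers concentrating the extensions on one person, which is exactly why ‘keep saving the same set of people’ comes out ahead.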