Ok. This makes sense to me. GDP measures a mix of trades that occur due to simple mutual benefit and “trades” that occur because of extortion or manipulation.
If you look at the combined metric, and interpret it to be a measure of only the first kind of trade, you’re likely overstating how much value is being created, perhaps by a huge margin, depending on what percentage of trades are based on violence.
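As a toy illustration of that overstatement (all numbers hypothetical, just to make the arithmetic concrete):

```python
# Toy model (hypothetical numbers): GDP lumps together mutually
# beneficial trades and coerced "trades" into a single total.
measured_gdp = 1000.0    # total recorded transactions
coerced_fraction = 0.3   # hypothetical share driven by extortion/manipulation

# Value actually created by mutually beneficial trade, and how much
# reading the combined metric naively overstates it.
value_created = measured_gdp * (1 - coerced_fraction)
overstatement = measured_gdp / value_created

print(value_created)  # 700.0
print(overstatement)  # ~1.43x overstatement at a 30% coerced share
```

The point is just that the overstatement factor grows without bound as the coerced share approaches one; the actual share is the unknown quantity in dispute.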
But I’m not really clear on why you’re talking about GDP at all. It seems like you’re taking the claim that “GDP is a bad metric for value creation” and concluding that “interventions like GiveDirectly are misguided.”
Rereading this thread, I come to this:
If people who can pay their own rent are actually doing nothing by default, that implies that our society’s credit-allocation system is deeply broken. If so, then we can’t reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.
Is the argument something like...
1. GDP is irreparably corrupt as a useful measure. Folks often take it as a measure of how much value is created, but it is actually just as much a measure of how much violence is being done.
2. This is an example of a more general problem: All of our metrics for tracking value are similarly broken. Our methods of allocating credit don’t work at all.
3. Given that we don’t have robust methods for allocating credit, we can’t trust that anything good happens when we give money to the actual organization “GiveDirectly”. For all we know, that money gets squandered on activities that superficially look like helping but are actually useless or harmful. (This is a reasonable supposition, because, on priors, this is what most organizations do.)
4. Given that we can’t trust that giving money to GiveDirectly does any good, our only hope for doing good is to actually make sense of what is happening in the world, so that we can construct credit-allocation systems on which we can actually rely.
On a scale of 0 to 10, how close was that?

This is something like a 9: it gets the overall structure of the argument right, with some important caveats.
I’d make a slightly weaker claim for 2: that credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.
An important part of the reason for 3 is that the larger the share of “knowledge work” we think is mostly about creating disinformation, the more one should distrust any official representations one hasn’t personally checked, whenever there’s a profit or social incentive to make up such stories. Based on my sense of the character of the people I met while working at GiveWell, and the kind of scrutiny they said they applied to charities, I’d personally be surprised if GiveDirectly didn’t actually exist, or simply pocketed the money. But it’s not at all obvious to me that people without my privileged knowledge should be sure of that.
credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.
That does not seem obvious to me. It certainly does not seem to follow merely from the fact that GDP is not a good measure of national welfare. (In large part because my impression is that economists say all the time that GDP is not a good measure of national welfare.)
Presumably you believe that point 2 holds, not just because of the GDP example, but because you’ve seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?
Is that right? Can you say more about why you expect this to be a general problem?
. . .
I have a much higher credence than you do that GiveDirectly exists and is doing basically what it says it is doing.
If I do a stack trace on why I think that...
I have a background expectation that the most blatant kinds of fraud will be caught. I live in a society that has laws, including laws about what sorts of things non-profits are allowed to do, and not do, with money. If they were lying about ever having given any money to anyone in Africa, I’m confident that someone would notice, blow the whistle, and the perpetrators would end up in jail. (A better-hidden, but consequently less extreme, incidence of embezzlement is much more plausible, though I would still expect it to be caught eventually.)
They’re sending somewhat costly-to-fake signals of actually trying to help. For instance, I heard on a blog once that they were doing an RCT to see if cash transfers actually improve people’s lives. (I think. I may just be wrong about the simple facts here.) Most charities don’t do anything like that, and most of the world doesn’t fault them for it. Plus it sounds like a hassle. The only reasons you would organize an RCT are 1) you are actually trying to figure out if your intervention works, 2) you have a very niche marketing strategy that involves sending costly signals of epistemic virtue, to hoodwink people like me into thinking “Yay GiveDirectly”, or 3) some combination of 1 and 2, whereby you’re actually interested in the answer, and part of your motivation is also knowing how much it will impress the EAs.
I find it implausible that they are doing strictly 2, because I don’t think the idea would occur to anyone who wasn’t genuinely curious. 3 seems likely.
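For what it’s worth, the core of such an RCT is conceptually simple. Here is a hedged sketch with entirely made-up numbers (not GiveDirectly’s actual data or methodology): randomize households into treatment and control, then compare the average outcome.

```python
import random
import statistics

random.seed(1)

# Hypothetical RCT sketch: compare an outcome (say, monthly consumption)
# between households randomly assigned to receive a cash transfer and
# those that weren't. The +10 treatment effect is an assumption of this toy.
control = [random.gauss(100, 15) for _ in range(500)]
treatment = [random.gauss(110, 15) for _ in range(500)]

# Randomization makes the difference in means an unbiased estimate
# of the average treatment effect.
estimated_effect = statistics.mean(treatment) - statistics.mean(control)
print(round(estimated_effect, 1))  # roughly 10, the assumed true effect
```

The costly part of a real RCT is not this arithmetic but the randomization, follow-up, and honest reporting, which is exactly why it works as a hard-to-fake signal.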
Trust chains: They are endorsed by people who are respected by people whose epistemics I trust. GiveWell endorsed them. I personally have not read GiveWell’s evaluations in much depth, but I know that many people around me, including, for instance, Carl Shulman, have engaged with them extensively. Not only does everyone around me have oodles of respect for Carl, but I can personally verify (with a small sample size of interactions) that his thinking is extremely careful and rigorous. If Carl thought that GiveWell’s research was generally low quality, I would expect this to be a known, oft-mentioned thing (and I would expect his picture not to be on the OpenPhil website). Carl is, of course, only an example. There are other people around whose epistemics I trust who find GiveWell’s research good enough to be worth talking about. (Or at least old-school GiveWell. I do have a sense that the magic has faded in recent years, as usually happens to institutions.)
I happen to know some of these people personally, but I don’t think that’s a crux. Several years ago, I was a smart but inexperienced college student. I came across LessWrong and correctly identified that the people of that community had better epistemology than me (plus I was impressed with this Eliezer guy, who was apparently making progress on philosophical problems in sort of the mode in which I had tried to make progress, but he was way ahead of me, and way more skilled). On LessWrong, they were talking a lot about GiveWell and GiveWell-recommended charities. I think it was pretty reasonable to assume that the analysis going into choosing those charities was high quality. Maybe not perfect, but much better than I could expect to do myself (as a college student).
It seems to me that I’m pretty correct in thinking that GiveDirectly does what it says it does.
You disagree though? Can you point at what I’m getting wrong?
My current understanding of your view: You think that institutional dysfunction and optimized misinformation are so common that the evidence I note above is not sufficient to overwhelm the prior, and I should assume that GiveDirectly is doing approximately nothing of value (and maybe causing harm) until I get much stronger evidence otherwise. (And that evidence should be of the form that I can check with my own eyes and my own models?)
I have a background expectation that the most blatant kinds of fraudulence will be caught.
Consider how long Theranos operated, its prestigious board of directors, and the fact that it managed to make a major sale to Walgreens before blowing up. Consider how prominent Three Cups of Tea was (promoted by a New York Times columnist), for how long, before it was exposed. Consider that official US government nutrition advice still reflects obviously distorted, politically motivated research from the early 20th Century. Consider that the MLM company Amway managed to bribe Harvard to get the right introductions to Chinese regulators. Scams can and do capture the official narrative and prosecute whistleblowers.
Consider that pretty much by definition we’re not aware of the most successful scams.

Related: The Scams Are Winning
[Note that I’m shifting the conversation some. The grandparent was about things like GiveDirectly, and this is mostly talking about large, rich companies like Theranos.]
One could look at this evidence and think:
Wow. These fraudulent endeavors ran for a really long time. And the fact that they got caught means that they are probabilistically not the best-executed scams. This stuff must be happening all around us!
Or a person might look at this evidence and think:
So it seems that scams are really quite rare: there are only a dozen or so scandals like this every decade. And they collapsed in the end. This doesn’t seem like a big part of the world.
Because this is a situation involving hidden evidence, I’m not really sure how to distinguish between those worlds, except for something like a randomized audit: 0.001% of companies in the economy are randomly chosen for a detailed investigation, regardless of any allegations.
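The randomized-audit idea can be simulated. A sketch with hypothetical numbers (an invented economy with an unknown 2% scam rate) shows that even auditing a tiny random fraction of firms, chosen without regard to allegations, pins down the base rate fairly well:

```python
import random

random.seed(0)

# Hypothetical economy: 500,000 firms, an unknown 2% of which are scams.
# True for "is a scam", False otherwise.
firms = [random.random() < 0.02 for _ in range(500_000)]

# Randomly audit 1% of firms, regardless of any allegations, and assume
# (optimistically) that a detailed investigation reliably detects a scam.
audited = random.sample(firms, 5_000)
estimated_scam_rate = sum(audited) / len(audited)

print(round(estimated_scam_rate, 3))  # close to the true 2%
```

The assumption doing the work is that the audit itself is reliable; the random selection only removes the bias from conditioning on which scams happened to blow up.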
I would expect that we live in something closer to the second world, if for no other reason than that this world looks really rich, and that wealth has to be created by something other than outright scams (which is not to say that everyone isn’t also dabbling in misinformation).
I would be shocked if more than one of the S&P 500 companies were a scam on the level of Theranos. Does your world model predict that some of them are?
Coca-Cola produces something about as worthless as Theranos machines, substituting the experience of a thing for the thing itself, & is pretty blatant about it. The scams that “win” gerrymander our concept-boundaries to make it hard to see. Likewise Pepsi. JPMorgan Chase & Bank of America, in different ways, are scams structurally similar to Bernie Madoff but with a legitimate state subsidy to bail them out when they blow up. This is not an exhaustive list, just the first 4 that jumped out at me. Pharma is also mostly a scam these days, nearly all of the extant drugs that matter are already off-patent.
Also Facebook, but “scam” is less obviously the right category.
Somewhat confused by the Coca-Cola example. I don’t buy Coke very often, but it usually seems worth it to me when I do (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?
It was originally marketed as a health tonic, but its apparent curative properties were due to the powerful stimulant and analgesic cocaine, not any health-enhancing ingredients. Later the cocaine was taken out (but the “Coca” in the name retained), so now it fools the subconscious into thinking it’s healthful with—on different timescales—mass media advertising, caffeine, and refined sugar.
It’s less overtly a scam now, in large part because it has the endowment necessary to manipulate impressions more subtly at scale.
I mean, I agree that Coca-Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that’s very different from the thing with Theranos.
I model Coca-Cola as mostly damaging to my health, and model its short-term positive performance effects as basically fully mediated via caffeine, but I still think it’s providing me value above and beyond those benefits, and outweighing the costs in certain situations.
Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos’ capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and I would be highly surprised if there turned out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me. It doesn’t seem like that’s what you are arguing for.
Presumably you believe that point 2 holds, not just because of the GDP example, but because you’ve seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?
Both—it would be worrying to have an analytic argument but not notice lots of examples, and it would require much more investigation (and skepticism) if it were happening all the time for no apparent reason.
I tried to gesture at the gestalt of the argument in The Humility Argument for Honesty. Basically, all conflict between intelligent agents contains a large information component, so if we’re fractally at war with each other, we should expect most info channels that aren’t immediately life-support-critical to turn into disinformation, and we should expect this process to accelerate over time.
For examples, important search terms are “preference falsification” and “Gell-Mann amnesia”.
I don’t think I disagree with you on GiveDirectly, except that I suspect you aren’t tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct. Quick check: what’s your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
except that I suspect you aren’t tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct.
Interesting.
Quick check: what’s your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?
I don’t know; certainly not off by more than half a billion in either direction? I don’t know how hard it is to estimate the number of people on Earth. It doesn’t seem like there’s much incentive to mess with the numbers here.
It doesn’t seem like there’s much incentive to mess with the numbers here.
Guessing at potential confounders: there may be incentives for individual countries (or cities) to inflate their numbers (to seem more important), or to deflate their numbers, to avoid taxes.
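One way to make the correlated-errors worry concrete: consulting many sources only narrows your confidence interval if their errors are independent. If every source ultimately copies the same official census figures, averaging them barely helps. A sketch with hypothetical numbers:

```python
import math

# Hypothetical setup: k sources, each with the same per-source error
# spread (standard deviation sigma, in people), and a correlation rho
# between the sources' errors (rho near 1 = everyone copies one census).
sigma = 0.25e9  # assumed std dev of a single source's error
k = 10          # number of sources consulted
rho = 0.9       # assumed correlation between sources' errors

# Standard error of the mean of k equicorrelated estimates:
# var = sigma^2 * ((1 - rho)/k + rho)
independent_se = sigma / math.sqrt(k)
correlated_se = sigma * math.sqrt((1 - rho) / k + rho)

print(round(independent_se / 1e9, 3))  # ~0.079 billion if independent
print(round(correlated_se / 1e9, 3))   # ~0.239 billion: barely better than one source
```

So a trust chain whose links all bottom out in the same official statistic should not tighten one’s interval much beyond what a single source justifies.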