“8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon”

http://kruel.co/2013/09/22/machine-intelligence-research-institute-some-numbers/

Pulling this number out of the video and presenting it by itself, as Kruel does, leaves out important context, such as Anna’s statement: “Don’t trust this calculation too much. [There are] many simplifications and estimated figures. But [then] if the issue might be high stakes, recalculate more carefully.” (E.g., after purchasing more information.)
However, Anna next says:
I’ve talked about [this estimate] with a lot of people and the bargain seems robust. Maybe you go for a soft takeoff scenario, [then the estimate] comes out maybe an order of magnitude lower. But it still comes out [as] unprecedentedly much goodness that you can purchase for a little bit of money or time.
And that is something I definitely disagree with. I don’t think the estimate is anywhere near that robust.
I agree with Luke’s comment; compared to my views in 2009, the issue now seems more complicated to me; my estimate of impact from donation re: AI risk is lower (though still high); and I would not say that a particular calculation is robust.
my estimate of impact from donation re: AI risk is lower (though still high)
Out of curiosity, what’s your current estimate? I recognize it’ll be rough, but even e.g. “more likely than not between $1 and $50 per life saved” would be interesting.
And that is something I definitely disagree with. I don’t think the estimate is anywhere near that robust.
Is this MIRI’s official position? Because, AFAIK, that estimate was never retracted.
Anyway, the problem doesn’t seem to be so much the exact numbers as the process: what she did was essentially a travesty of a Fermi estimate, in which she pulled numbers out of thin air and multiplied them together to get a self-serving result (see the sketch below).
This person is “Executive Director and Cofounder” of CFAR. Is this what they teach for $1,000 a day? How to fool yourself by performing a mental ritual with made up numbers?
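To make the process criticism concrete, here is a minimal sketch of why this kind of multiplication is fragile. The factor names and ranges are hypothetical placeholders, not the inputs from Salamon’s talk; the point is only what happens when several order-of-magnitude-uncertain numbers get multiplied together:

```python
import math
import random

# Illustrative Fermi-style product of rough factors. The names and ranges
# below are hypothetical placeholders, NOT the figures from Salamon's talk;
# the point is only how uncertainty compounds under multiplication.
FACTORS = [
    ("future lives at stake", 1e9, 1e11),
    ("probability the risk is real", 0.01, 0.3),
    ("effect of one marginal dollar", 1e-12, 1e-10),
]

def log_uniform(lo: float, hi: float) -> float:
    """Sample log-uniformly between lo and hi (order-of-magnitude uncertainty)."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def one_estimate() -> float:
    """One draw of the product of all uncertain factors: 'lives per dollar'."""
    product = 1.0
    for _name, lo, hi in FACTORS:
        product *= log_uniform(lo, hi)
    return product

samples = sorted(one_estimate() for _ in range(10_000))
print(f"10th percentile: {samples[1_000]:.2g}")
print(f"median:          {samples[5_000]:.2g}")
print(f"90th percentile: {samples[9_000]:.2g}")
```

The 10th and 90th percentiles typically land a couple of orders of magnitude apart; a single multiplied-out figure inherits all of that spread while displaying none of it.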
Is this MIRI’s official position? Because, AFAIK, that estimate was never retracted.
I don’t know what Anna’s current view is. (Edit: Anna has now given it.)
In general, there aren’t such things as “MIRI official positions”; there are just individual persons’ opinions at a given time. Asking for MIRI’s official position on a research question is like asking for CSAIL’s official opinion on AGI timelines. If there are “MIRI official positions,” I guess they’d be board-approved policies like our whistleblower policy or something.

Thanks for the answer.
You are ignoring that the slide projected while she was saying it emphasises the point; it was being treated as an important point to make.
“It’s out of context!” is a weaselly argument, and one that, having watched the video and read the transcript, I really just don’t find credible. It’s not at all at odds with the context. The context is fully available. Anna made that claim, she emphasised it as a point worth noting beforehand in the slide deck, she apparently meant it at the time. You’re attempting to discredit Kruel in general by ad hominem, and doing so in a manner that is simply not robust.
I see nowhere the claim that Kruel pretended to quote from that video.
That’s clearly a rough estimate of the value of a positive singularity, and MIRI studies only one pathway to it. Donations to MIRI are not fungible with donations toward a positive singularity, yet they would need to be for Kruel’s misquote to be even roughly equivalent to what Salamon actually said.
Even if we grant that unstated premise, there’s her disclaimer that the estimate (of the value of a positive singularity) is worth writing down explicitly (Principle 1 @ 7:15) even though it is inaccurate and cannot be trusted (Principle 2, directly afterward).
Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned; saying people should be extremely skeptical of his claims is not pulling an ad hominem.
I see nowhere the claim that Kruel pretended to quote from that video.
12:31. “You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives.” There’s a slide up at that moment making the same claim. It wasn’t a casual aside, it was a point that was part of the talk.
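For what it’s worth, that is also exactly where the headline figure comes from. Assuming only the per-$100 number from the talk, the conversion is:

$$\frac{800\ \text{lives}}{\$100} = 8\ \text{lives per dollar}$$

which is the “8 lives saved per dollar” line that Kruel quotes.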
Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned;
He wasn’t in this case, and you haven’t shown it in any other case. Do you have a list to hand?
Please respond to the second paragraph of my previous comment, which explains why this doesn’t mean what Kruel claims it means. Also note that I am not claiming it was not an important point in her talk.
Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned;
He wasn’t in this case, and you haven’t shown it in any other case. Do you have a list to hand?
You claim he wasn’t. I find three serious misrepresentations: 1) the original estimate was not about MIRI funding; 2) the original estimate was heavily disclaimed, except for a statement about “robustness”; 3) Salamon retracted it, including the robustness claim.
As for XiXi’s history of acting in bad faith, you should be more than familiar with it. But if you insist, here is his characterization of his criticism:

That said, on several occasions I failed to adopt the above principles and have often mocked MIRI/LW when it would have been better to engage in more serious criticism. But I did not fail completely. See for example my primer on AI risks or the interviews that I conducted with various experts about those risks. I cannot say that MIRI/LW has been trying to rephrase the arguments of their critics in the same way that I did, or went ahead and asked experts to review their claims.
(emphasis added). Note that this comment was posted three weeks before his post on the Salamon misquote.
I don’t think you should form your opinion of Anna from this video. It gave me an initially very unfavorable impression that I updated away from after a few in-person conversations.
(If you read the other things I write you’ll know that I’m nowhere close to a MIRI fanatic so hopefully the testimonial carries some weight.)
I wonder how many lives donating a dollar to CFAR saves. 8^8?
I’ve actually watched the video. So that’s what passes for “rationality” at MIRI/CFAR? LoL!