Eliezer has not published a detailed explanation of his estimates, although he has published many of his arguments for his estimates.
Are these available? Are they the standard stuff (i.e., “Evidence and Import”)?
~
For myself, I think the cause of AI risk reduction, in total and over time, has a worthwhile small-to-medium probability of making an astronomical difference to our civilization’s future.
How do you arrive at that conclusion? I’m less skeptical of the cause-specific claim than the organization-specific claim, but it’s worth digging deeper into.
Eliezer...Are these available? Are they the standard stuff (i.e., “Evidence and Import”)?
Yes, and his posts about intelligence explosion on Overcoming Bias, this, this, and unfortunately comments scattered around Less Wrong and various interviews that would take some work to find and gather in one place.
How do you arrive at that conclusion? I’m less skeptical of the cause-specific claim than the organization-specific claim, but it’s worth digging deeper into.
Nick Bostrom’s book on superintelligence probably provides the best single treatment now, having synthesized most pre-existing work. It is moving towards publication, but you might ask him if you can read the draft.
Most pre-existing work? I would’ve said “having synthesized ~5% of pre-existing work related to superintelligence strategy that has been done at or near MIRI and FHI.”
Good news! Having now read the near-finished draft, my new guess is that Bostrom’s book synthesizes more like 20% of pre-existing work related to superintelligence strategy that has been done at or near MIRI and FHI. A lot has been added to the book since April. It’s really killing me that the book won’t be published until mid-2014.
One can delve indefinitely into any subtopic, but with diminishing returns. Do you think that it doesn’t address most of the higher-level topic areas, if not all of the issues arising therein?
No, I think it does a pretty good job of that. I’m not arguing that the book should be different than it is. I’m just saying that it definitely doesn’t synthesize “most” pre-existing work.