If you mean, ‘you pulled that number out of your butt, and therefore I call you on it,’ then I’ll have to admit defeat due to inability to break it down quantitatively. Sorry.
Yeah. On one hand, I think there is something to be said for needing to make these fast and loose estimates, and there's some basis for them. But on the other hand, I think one needs to recognize just how fast and loose they are. I think our error bars on MIRI's chance of success are really wide.
~
I think that's taken out of context. The way I understand it, he means superintelligence will have a really high impact regardless (near 100% probability), and is therefore a 'lever point': anyone paying attention to it has a higher-than-usual probability of influencing it, and since MIRI is one of very few groups paying attention, they have a medium probability of being such an influence.
Let me put that in premise-conclusion form:
P1: Superintelligence will, with probability greater than 99.999%, dramatically impact the future.
P2: One can change how superintelligence will unfold by working on superintelligence.
C3: Therefore from P1 and P2, working on superintelligence will dramatically impact the future.
P4: MIRI is one of the only groups working on superintelligence.
C5: Therefore from C3 and P4, MIRI will dramatically impact the future.
Do you think that’s right?
If so, I think P2 could be false, but I'll accept it for the sake of argument. The real problem, I think, is that the inference to C5 is fallacious. It either assumes that any work in the domain will affect how superintelligence unfolds in a controlled way (which seems false) or that MIRI's work in particular will have impact (which seems unproven).
P1 is almost certainly an overestimate: independent of everything else, there's surely more than a 0.001% chance that a civilization-ending event will occur before anyone gets around to building a superintelligence. The potential importance of AI research by way of this chain of logic wouldn't be lowered much if you used 80 or 90%, though.
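The claim that lowering P1 to 80 or 90% barely changes the conclusion can be checked with a quick sensitivity sketch. This is a minimal Python illustration: the values for P2 and for the MIRI-specific factor are made-up placeholders, not estimates anyone in this thread has endorsed.

```python
def chain_estimate(p1, p2=0.5, p_miri=0.2):
    """Naive product of the argument's steps:
    p1     -- superintelligence dramatically impacts the future (P1)
    p2     -- working on superintelligence changes how it unfolds (P2)
    p_miri -- MIRI's work in particular has an effect (placeholder)
    """
    return p1 * p2 * p_miri

# Varying P1 from 99.999% down to 80% changes the final
# estimate by only about 20% in relative terms.
for p1 in (0.99999, 0.9, 0.8):
    print(f"P1={p1}: chain estimate = {chain_estimate(p1):.3f}")
```

Whatever placeholder values you pick for the other factors, scaling P1 down by a fifth scales the product down by exactly a fifth, which is why the conclusion is robust to that adjustment.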
I'm not sure which fallacy you're invoking, but the claims (to paraphrase) that 'superintelligence is likely difficult to aim' and that 'MIRI's work may not have an impact' are certainly possible, and already factor into my estimates.
I think a fair amount of people argue that because a cause is important, anyone working on that cause must be doing important work.
The method is even more important (practice vs. perfect practice, philanthropy vs. GiveWell). I believe in the mission, not MIRI per se. If Eliezer decided that magic was the best way to achieve FAI and started searching for the right wand and hand gestures rather than doing math and decision theory, I would look elsewhere.