Thanks! Ah, I shouldn’t have put the word “portable” in there then. I meant to be talking about computers in general, not computers-on-missiles-as-opposed-to-ground-installations.
Also, the whole setup of picking something which we already know to be widespread (cars) and then applying Joe’s arguments to it seems like it shouldn’t tell us much. If Joe were saying 1% yes, 99% no for incentives to build APS systems, then the existence of counterexamples like cars, which have similar “no” arguments, would be compelling. But he’s saying 80% yes, 20% no, and so the fact that there are some cases where his “no” arguments fail is unsurprising—according to him, “no” arguments of this strength should fail approximately 80% of the time.
I think I agree with this, and should edit to clarify that that’s not the argument I’m making… what I’m saying is that sometimes, “Of course X requires Y” and “of course Y will be useful/incentivised” predictions can be made in advance, with more than 90% confidence. I think “computers will be militarily useful” and “self-propelled vehicles will be useful” are two examples of this. The intuition I’m trying to pump is not “Look at this case where X was useful, therefore APS-AI will be useful” but rather “look at this case where it would have been reasonable for someone to be more than 90% confident that X was useful despite the presence of Joe’s arguments; therefore, we should be open to the possibility that we should be more than 90% confident that APS-AI will be useful despite Joe’s arguments.” Of course I still have work to do, to actually provide positive arguments that our credence should be higher than 90%… the point of my analogy was defensive, to defend against Joe’s argument that because of such-and-such considerations we should have 20% credence on Not Useful/Incentivised.
(Will think more about what you said and reply later)
I guess I just don’t feel like you’ve established that it would have been reasonable to have credence above 90% in either of those cases. Like, it sure seems obvious to me that computers and automobiles are super useful. But I have a huge amount of evidence now about both of those things that I can’t really un-condition on. So, given that I know how powerful hindsight bias can be, it feels like I’d need to really dig into the details of possible alternatives before I got much above 90% based on facts that were known back then.
(Although this depends on how we’re operationalising the claims. If the claim is just that there’s something useful which can be done with computers—sure, but that’s much less interesting. There’s also something useful that can be done with quantum computers, and yet it seems pretty plausible that they remain niche and relatively uninteresting.)
Fair. If you don’t share my intuition that people in 1950 should have had more than 90% credence that computers would be militarily useful, or that people at the dawn of steam engines should have predicted that automobiles would be useful (conditional on them being buildable), then that part of my argument has no force on you.
Maybe instead of picking examples from the past, I should pick an example of a future technology that everyone agrees is 90%+ likely to be super useful if developed, even though Joe’s skeptical arguments can still be made.