But in this case, “more likely to be true” means something like “a good enough argument to move my priors by roughly an order of magnitude, or two at the outside”. Since in the face of our ignorance of the future, reasonable priors could differ by several orders of magnitude, even the best arguments I’ve seen aren’t enough to dismiss any “side” as silly or not worthy of further consideration (except stuff that was obviously silly to begin with).
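To make the "order of magnitude" framing concrete, here is a minimal sketch (my own illustration, not anything from the discussion above) of the odds-form Bayes update: an argument worth one order of magnitude is a Bayes factor of 10, which shifts the log-odds by exactly one unit.

```python
def update_odds(prior_odds: float, bayes_factor: float) -> float:
    """Odds-form Bayes update: posterior odds = prior odds * Bayes factor."""
    return prior_odds * bayes_factor

# An argument "good enough to move my priors by an order of magnitude"
# corresponds to a Bayes factor of 10.
prior_odds = 1 / 1000                       # prior probability ~0.1%
posterior_odds = update_odds(prior_odds, 10)
prob = posterior_odds / (1 + posterior_odds)
print(round(prob, 4))                        # ~0.0099: one such argument moves ~0.1% to ~1%
```

This is why one or two strong arguments cannot settle a question where reasonable priors already span several orders of magnitude: a factor-of-10 update leaves a hypothesis that started at 0.1% still under 1%.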
That’s a very good point.
I was intuitively tempted to retort with a number of points about the likelihood of exceptions and the information taken into consideration, but I realized before posting that I was actually falling victim to several biases in that train of thought. You’ve actually given me a new way to think about the issue. Still, my intuition remains that any new way of thinking about it will only reinforce my beliefs and support the S.I. over time.
For now, I’m content to concede that I was relying too heavily on my priors and on my confidence in my own knowledge of the universe (on which my posteriors for AI issues inevitably depend, one way or another), among possibly other mistakes. At first glance, though, this seems like further evidence of the need for a new mathematical or logical language in which to discuss these questions with greater depth, detail, and formality.