This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty is insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that; it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about, in cases where my warning has at least some tiny chance of convincing them to be less complacent or to notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good, the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
if I didn’t think the difficulty is insurmountable and one should lose hope already.
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so, I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Efforts to figure out what otherworldly superintelligences are up to.