It’s complicated. The three versions of theism I can immediately think up are, I suppose, “some superintelligent agent is computing us and this is important for our decisions”, “all superintelligences converge on the same superintelligent supermoral superpowerful decision algorithm-policy”, and “all superintelligences converge on the same superintelligent supermoral decision algorithm-policy and this is important for our decisions”. In our current state of knowledge these questions are more logical or indexical-the-way-that-word-used-to-make-sense-before-decision-theory than physical (not to say those are fundamentally different kinds of uncertainty, as I believe Nesov likes to point out). So if I start talking about specific facts of the world then I have to start talking about specific facts about logical attractors, akin to how fractal structures are attractors for evolving systems, and I can’t point to something nice and concrete like the supposed resurrection of Jesus. This makes the debate really rather difficult—a Bayesian debate much more than a scientific one—and not one where inferential distances can be quickly bridged or where convincing arguments can be made with less than many paragraphs of observations about trends of systems or the nature of modern decision theories.
At this point, I would worry more about the difficulty of producing thoughts that relate to the correct answers than about convincing others, if I didn’t think the difficulty were insurmountable and one should lose hope already.
There is a wiser part of me that invariably agrees with that; it’s just this stupid motivational coalition of mine that anti-anti-wants to warn others when they’re absolutely certain of something they shouldn’t be absolutely certain about, where my warning them has at least some tiny chance of convincing them to be less complacent or to notice confusion, so that I won’t be blamed in retrospect for having not even tried to help them. And when the wiser part starts talking about semi-consequentialist reasons why I’m doing more harm than good, the other coalition goes “Oh, you’re telling me to shut up and be evil. Doesn’t this sound familiar...”
if I didn’t think the difficulty is insurmountable and one should lose hope already.
Hm, are you implying I should perhaps just lose hope in non-insignificantly affecting direct efforts to improve decision theory? If so I’d like to make a bet.
(I parsed your comment like three different ways when I used three different inductive biases.)
Efforts to figure out what otherworldly superintelligences are up to.