I don’t believe you have anything to gain by insisting on using the word “time” in a technical-jargon sense. Or do you mean something different from “if self-fulfilling prophecies can be seen as choosing one of the imagined scenarios, and you imagine there are agents in those scenarios, you can also imagine those future agents as having influenced your decision today, as if they acted retro-causally”? Is there a need here for an actual non-physical philosophy that is not just a metaphor?
There’s a non-trivial conceptual clarification / deconfusion gained by FFS on top of the summary you gave there. I put decent odds on this clarification being necessary for some approaches to strongly scalable technical alignment.
(a strong opinion held weakly, not a rigorous attempt to refute anything, just to illustrate my stance)
TypeError: obviously, any correct data structure for this shape of problem must be approximating an infinite set (Bayesian), thus must be implemented lazily/generatively, thus must be learnable, thus must be redundant and cannot possibly be factored ¯\_(ツ)_/¯
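Rough sketch of the kind of lazy/generative representation that quip is gesturing at (the names and the geometric-distribution example are illustrative assumptions, not anything from FFS): the object with infinite support is held only as a sampler, and everything downstream interacts with it through draws rather than through an explicit finite factorization.

```python
import random
from typing import Callable

# A "lazy/generative" distribution is just something you can sample from;
# its (countably infinite) support is never materialized as a table of factors.
LazyDist = Callable[[], int]

def geometric_sampler(p: float = 0.5) -> LazyDist:
    """Geometric distribution (failures before first success), held only as a generator."""
    def sample() -> int:
        n = 0
        while random.random() > p:
            n += 1
        return n
    return sample

def monte_carlo_mean(dist: LazyDist, draws: int = 10_000) -> float:
    """All access is via redundant samples, the kind of interface a learner sees."""
    return sum(dist() for _ in range(draws)) / draws

if __name__ == "__main__":
    # Expected value is (1 - p) / p = 1.0 for p = 0.5; the estimate hovers near that.
    print(monte_carlo_mean(geometric_sampler(0.5)))
```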
also, strong alignment is impossible, and given the observation that we live in the least dignified world, doom will be forward-caused by someone who thinks alignment is possible and makes a mistake: