This comment looks to me like you’re missing the main insight of finite factored sets. Suggest reading https://www.lesswrong.com/posts/PfcQguFpT8CDHcozj/finite-factored-sets-in-pictures-6 and some of the other posts, maybe https://www.lesswrong.com/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets and https://www.lesswrong.com/posts/qhsELHzAHFebRJE59/a-greater-than-b-greater-than-a, until it makes sense why a bunch of clearly competent people thought this was an important contribution.
One of the comments you linked has an edit showing they updated towards this position.
This is a non-trivial insight and reframe, and I’m not going to try to write a better explanation than Scott and Magdalena. But if you take the time to get it and respond with a clear understanding of the frame, I’m open to taking a shot at answering stuff.
I don’t believe you have anything to gain by insisting on using the word “time” in a technical jargon sense. Or do you mean something different from “if self-fulfilling prophecies can be seen as choosing one of the imagined scenarios, and you imagine there are agents in those scenarios, you can also imagine that those future agents will-have-influenced your decision today, as if they acted retro-causally”? Is there a need for an actual non-physical philosophy, one that is not just a metaphor?
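Concretely, the reading I have in mind is something like this toy sketch (the scenario, names, and numbers are made up purely for illustration):

```python
# Toy model: a self-fulfilling prophecy read as fixed-point selection,
# with no retro-causation required. Hypothetical example, not from FFS.

def world_response(prediction: str) -> str:
    """How the world turns out *given* that this prediction is announced."""
    # If a bank run is predicted, depositors withdraw and the bank fails;
    # if stability is predicted, they stay put and the bank survives.
    return "bank_fails" if prediction == "bank_fails" else "bank_survives"

candidate_predictions = ["bank_fails", "bank_survives"]

# "Choosing one of the imagined scenarios" = picking a prediction that
# comes true because it was announced, i.e. a fixed point of world_response.
self_fulfilling = [p for p in candidate_predictions if world_response(p) == p]
print(self_fulfilling)  # ['bank_fails', 'bank_survives'] -- the predictor's
                        # choice among the fixed points settles the outcome.
```

In this sketch the “future agents” only ever exist inside the predictor’s model while it evaluates world_response, which is why I read the retro-causal language as a metaphor.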
There’s a non-trivial conceptual clarification / deconfusion gained by FFS on top of the summary you made there. I put decent odds on this clarification being necessary for some approaches to strongly scalable technical alignment.
(a strong opinion held weakly, not a rigorous attempt to refute anything, just to illustrate my stance)
TypeError: obviously, any correct data structure for this shape of the problem must be approximating an infinite set (Bayesian), thus must be implemented lazy/generative, thus must be learnable, thus must be redundant and cannot possibly be factored ¯\_(ツ)_/¯
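To pin down what I mean by those words, a toy sketch in my own framing (nothing here is from the FFS posts):

```python
from itertools import product, islice
from typing import Dict, Iterator

# A "factored" representation: the space is the Cartesian product of a
# few finite factors, and all of it is written down up front.
factors = {
    "weather": ["sun", "rain"],
    "action":  ["go_out", "stay_in"],
}
factored_space = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(factored_space))  # 4 -- every scenario exists explicitly

# A "lazy/generative" representation: scenarios are produced on demand,
# and the underlying space need not be finite at all.
def scenarios() -> Iterator[Dict[str, object]]:
    n = 0
    while True:                      # unbounded; never fully materialized
        yield {"weather": "sun" if n % 2 == 0 else "rain", "step": n}
        n += 1

print(list(islice(scenarios(), 3)))  # only the prefix you ask for gets built
```

This is only meant to fix the vocabulary; whether that second kind of representation can in principle be factored is exactly what the shrug is about.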
also, strong alignment is impossible, and under the observation that we live in the least dignified world, doom will be forward-caused by someone who thinks alignment is possible and makes a mistake: