One reason I decided to make this a LessWrong post is that it’s a demonstration of (and indeed becomes, halfway through, somewhat of a manifesto for) cyborgism for alignment research. Generating the multiverse associated with this document helped me form, crystallize, and articulate many ideas that are central to the Simulators sequence, many of which I haven’t written about publicly yet.
The most novel concept introduced to me by Language Ex Machina, I think, is the analogy of sampling trajectories from GPT to “quantum poetics”: that
Physics would be a complete and exhausting classification of everything there is if quantum mechanics were not true. The universe would be trapped in a perfect latticework prison and nothing would ever happen except the relentless ticking of the universe’s clock.
that is, much of the complexity in our Everett branch is thanks to the gratuitous bits of specification injected every time the wavefunction is measured. I hadn’t explicitly considered this before in the context of physics, though I had in the context of GPT sampling. Since then, I’ve come across this same notion in the book Programming the Universe by Seth Lloyd, but it was novel to me at the time, and I think it’s probably the first time the concept was related to GPT.
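The “bits of specification injected” at each measurement have a direct analogue in GPT sampling: every sampled token fixes information that the distribution left open, and the expected number of bits fixed per step is just the Shannon entropy of the next-token distribution. A minimal sketch of this idea, using an invented toy logit vector (the numbers and function names here are illustrative, not from any real model):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def bits_injected(probs):
    """Shannon entropy of the sampling distribution: the expected number
    of bits of 'specification' fixed by one sampling step."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy next-token logits for four hypothetical continuations.
logits = [3.0, 2.5, 1.0, 0.2]

for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: {bits_injected(probs):.2f} bits injected per token")
```

At low temperature the distribution sharpens and few bits are injected per token (the trajectory is nearly forced); at high temperature it flattens toward uniform and each sampling step injects close to the maximum log2(vocab size) bits of gratuitous specification.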
I don’t think this is accurate, though. Free will still exists in a fully deterministic universe, because we are part of the chaotic resolution process. To see how this could be, imagine a GPT instance sampled with a fully deterministic cryptographic PRNG. Determinism doesn’t change the fact that the weights’ immense complexity feeds a great deal of integrated information into the decisions about which direction to steer the chaos; the system will still display sensitive dependence on initial conditions, and in my view, intelligent edge-of-chaos dynamics in a deterministic context are enough to yield valuable complexity. True randomness isn’t necessary; we are our internal consensus process’s decisions.
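The deterministic-PRNG thought experiment can be made concrete with a toy sketch. Everything below is invented for illustration (the five-word vocabulary, the hash-based “model” standing in for frozen GPT weights, the seeded `random.Random` standing in for a cryptographic PRNG): the whole pipeline is a pure function of the seed, yet trajectories from nearby seeds diverge.

```python
import random

# Toy vocabulary; the "model" below stands in for fixed, deterministic weights.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def next_probs(last_token):
    # Deterministic function of context: weights derived from the token's
    # characters, a crude stand-in for a forward pass through frozen weights.
    h = sum(ord(c) for c in last_token)
    weights = [(h * (i + 3)) % 7 + 1 for i in range(len(VOCAB))]
    total = sum(weights)
    return [w / total for w in weights]

def sample_trajectory(seed, length=10):
    rng = random.Random(seed)  # deterministic PRNG: the only "entropy" source
    token = "the"
    out = [token]
    for _ in range(length):
        probs = next_probs(token)
        token = rng.choices(VOCAB, weights=probs)[0]
        out.append(token)
    return out

# Fully deterministic: the same seed always yields the same trajectory.
assert sample_trajectory(0) == sample_trajectory(0)

# Yet sensitively dependent on initial conditions: nearby seeds diverge.
print(sample_trajectory(0))
print(sample_trajectory(1))
```

Nothing in this pipeline is truly random, but the trajectory still reflects the structure of the “weights” at every step; this is the sense in which a deterministic resolution process can still be doing all the interesting deciding.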