pre-theoretic not-quite-true notes:

it has something to do with hyperstition, because hyperstition is the informational direction of time.
Garrabrant inductors are an important component of understanding, after all. I had wondered if I’d ever need them; thanks, Garrabrant, y’all were right all along.
Garrabrant inductors may be the precursor of something simpler that has to do with information-theoretic relativity.
Kennaway is probably right in the other comment, so long as information only flows backward through time in the sense of learning the shape of the mechanisms that defined the past of a particular local region. It is the flow of information through a mechanism that allows predicting the future, and that prediction in turn illuminates the shape of the emergent mechanism: the network of reductionist mechanisms at each point in a time-synchronous behavior.
Pondering this makes me wonder … trailed off, see next comment
Hyperstition in the accelerationist sense? I view hyperstitions (self-fulfilling prophecies) as a sort of extreme or degenerate form of control. It’s definitely real, but in the same sense that chaos magic (https://en.wikipedia.org/wiki/Chaos_magic#Belief_as_a_tool) is real, or like how “which rising star will be the next CEO” is a real question in a Moral Maze. I could be convinced otherwise but my sense is that this is a non-answer to the question of how real agents (with some partial coherence, and with reflective efforts to become more coherent) have their ultimate effects on the world determined.
Hmm, yeah, I meant self-fulfilling prophecy. I’ll have to ponder this response; I had been thinking of self-fulfilling prophecy as encompassing all control, including thermostats. Surprised you disagree.
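To make the "thermostats are self-fulfilling prophecies" framing concrete, here's a toy sketch (not from the thread; the setpoint and gain values are made up): a thermostat declares a target state of the world and then acts until the world matches the declaration.

```python
# Toy model: a thermostat as the most mundane "self-fulfilling prophecy".
# It asserts "the room will be 20 degrees" and acts until that is true.

def thermostat_step(temp: float, setpoint: float, gain: float = 0.5) -> float:
    """Move the room temperature a fraction of the way toward the setpoint."""
    return temp + gain * (setpoint - temp)

temp = 10.0
setpoint = 20.0
for _ in range(20):
    temp = thermostat_step(temp, setpoint)

# After enough steps the declared state has made itself true.
print(round(temp, 3))  # prints 20.0
```

On this view, the only difference between a thermostat and a hyperstition is how much of the causal work is done by other agents believing the declaration.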
Some version of that could make sense. But there’s something that seems saliently different about a person deciding to build a house and then building it, vs., say, Mr. Trumpet going around telling Mr. Gold “oh yeah, Mr. Silver and Mr. Platinum are big investors on this project” and telling Mr. Silver and Mr. Platinum the conjugate thing until the project is pulled out of the ether as sufficiently invested in. In the latter case, from the investors’ perspective, the project is kind of a hyperstition. But actually they could decide to make choices about that, and Mr. Trumpet is in fact making choices about that.
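The Mr. Trumpet story can be sketched as a fixed-point toy model (the names come from the comment above; the commitment rule, that each investor joins only if they believe some other investor already has, is a hypothetical simplification):

```python
# Hypothetical toy model of a hyperstition: each investor commits only if
# they believe at least one *other* investor is committed. Nobody starts
# committed, so the honest fixed point is "no project". Seeding false
# beliefs flips the system to the other fixed point, where the lie is true.

def update(commitments: dict[str, bool], beliefs: dict[str, bool]) -> dict[str, bool]:
    """Each investor commits iff they believe some other investor has committed."""
    return {
        name: any(beliefs[other] for other in commitments if other != name)
        for name in commitments
    }

investors = ["Gold", "Silver", "Platinum"]
commitments = {name: False for name in investors}

# Without intervention, beliefs track reality and nothing happens.
honest = update(commitments, commitments)
print(any(honest.values()))  # prints False

# Trumpet tells each investor the others are in (beliefs diverge from reality).
seeded_beliefs = {name: True for name in investors}
commitments = update(commitments, seeded_beliefs)
# Now the commitments are real, so even truthful beliefs sustain them.
commitments = update(commitments, commitments)
print(all(commitments.values()))  # prints True
```

The salient difference from the house-builder may be that here the "prophecy" only works by routing through other agents' beliefs, and it selects between multiple self-consistent equilibria rather than causing an outcome directly.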
Got distracted before finishing edits. Makes me wonder whether ChatGPT’s commentary here, when asked to come up with its own [parable of predict-o-matic](https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern), might have been more insightful than I realized at the time. It might actually be competitive as a short parable of predict-o-matic, in a descriptive sense, if competing for the name.
Posting to shortform because there are many of these. While searching for posts to share as prior work, I found the parable of predict-o-matic, which is a very good post about self-fulfilling prophecies (tag). I thought it would be interesting to see what ChatGPT had to say when prompted with a reference to the post. It mostly didn’t succeed; I highlighted key differences between each result. The prompt:
Describe the parable of predict-o-matic from memory.
samples (I hit retry several times):
The parable of predict-o-matic is a story about a powerful machine that can predict the future with incredible accuracy. It is a valuable tool for those who can afford it, but it has a dark side.
The machine is used by a wealthy and powerful man to gain even more wealth and power. He uses it to predict the stock market, elections, and other events, and he always comes out ahead.
However, one day, the man’s son begins to question the ethics of using the machine. He realizes that by predicting the future, the machine is taking away free will and choice from those who are not privy to its predictions.
The son eventually destroys the machine, but not before it predicts its own destruction. The parable ends with the message that although knowledge of the future may be useful, it also comes with great responsibility and the potential for harm.
Of course, there’s prior art it was pulling from, including the post on LessWrong, which is much longer, gets further into the point, and explores all the interesting and varied ways it could go in reality at a company building such a thing; but I’ve never seen any of it presented in a distilled format like this. Very nicely done, IMO.