lorepieri (Lorenzo Pieri), Knowledge Seeker
https://lorenzopieri.com/
On Falsifying the Simulation Hypothesis (or Embracing its Predictions)
Quantum computing is a very good point. I thought about it, but I’m not sure we should consider it “optional”. Perhaps to simulate our reality with good fidelity, simulating quantum effects is necessary, not optional. If the simulators are already simulating all the quantum interactions in our daily life, building quantum computers would not really increase the power consumption of the simulation.
Not sure I get what you mean by simpler universes. According to the SH, simulated universes greatly outnumber any real universes.
The bold claim is that we can actually extract experimental consequences even for passive simulations, if only probabilistically. Active simulations are indeed interesting because they would give us a way to prove that we are in a simulation, while the argument in the post can only disprove that we are in one.
A possible problem with active simulations is that they may be a very small percentage of the total simulations, since they require someone actively interacting with the simulation. If this is true, we are very likely a passive simulation.
Regarding the first point, yes, that’s likely true, much easier. But if you want to simulate a coherent, long-lasting observation (so really a Brain in a Vat (BIV), not a Boltzmann Brain) you need to make sure that you are sending the right perceptions to the brain. How do you know exactly which perception to send if you don’t compute the evolution of the system in the first place? You would end up with conflicting observations. It’s not much different from how current single-player videogames are built: only one intelligent observer (the player) and an entire simulated world. As we know, running advanced videogames is very compute-intensive, and videogames simulating large worlds are far more compute-intensive than ones simulating small worlds. Right now developers use tricks and accept inconsistencies to get around this; for instance, they don’t keep in memory the footprints that your character left 10 hours of play ago in a distant part of the map.
What I’m saying is that there is no O(1) or O(log N) general way of simulating even just the perceptions of the universe. Merely reading the input of the larger system to be simulated already takes O(N).
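As a toy sketch of that lower bound (my own illustration, not from the original comment): any faithful perception depends on every element of the world state, so merely producing the observation is already linear in the state size.

```python
def render_perception(world_state):
    """Toy 'perception' of a simulated world: each observed value
    depends on the corresponding piece of world state, so producing
    the observation requires touching all N elements -- O(N) work,
    with no O(1) or O(log N) shortcut in the general case."""
    return [x % 256 for x in world_state]  # a single pass over all N inputs

obs = render_perception(list(range(1000)))  # N = 1000 reads, unavoidably
```

Any scheme that skipped part of the state could be fooled by changing exactly the skipped part, which is the source of the conflicting observations mentioned above.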
The probability you are speaking about is relative to quantum fluctuations or similar. If the content of the simulations is randomly generated, then surely Boltzmann Brains are by far more likely. But here I’m speaking about the probability distribution over intentionally generated ancestor simulations. This distribution may contain very few Boltzmann Brains, if the simulators do not consider them interesting.
My view is that Kolmogorov complexity is the right simplicity measure for probabilistically or brute-force generated universes, as you also mention. But for intentionally generated universes, the length and elegance of the program is not that relevant in determining how likely a simulation is to be run, while computational power and memory are hard constraints that the simulators must face.
For instance, while I would expect unnecessarily long programs to be unlikely to be run, if a long program L is 2x more efficient than a shorter program S, then I expect L to be more likely to be run (many more simulators can afford L, it is cheaper to run in bulk, etc.).
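A toy model of this weighting (the budgets and costs below are hypothetical, purely for illustration): if each simulator runs a program as many times as its compute budget allows, the cheaper-to-run program L gets run more often than the shorter program S.

```python
# Hypothetical compute budgets of six simulators (arbitrary units).
budgets = [1, 2, 4, 8, 16, 32]
cost_S = 8   # shorter program, but more expensive per run
cost_L = 4   # longer program, 2x more efficient per run

# Each simulator runs a program as many times as it can afford.
runs_S = sum(b // cost_S for b in budgets)  # 0+0+0+1+2+4 = 7
runs_L = sum(b // cost_L for b in budgets)  # 0+0+1+2+4+8 = 15
```

Under this toy weighting an observer is roughly twice as likely to find themselves in an L-run as in an S-run, despite L being the longer program.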
Thanks for sharing, I will cite in a future v2 of the paper.
I don’t agree that simpler implies the highest probability of glitches, at least not always. For instance, restrict to the case of the same universe-simulating algorithm running on portions of simulated space of different sizes (same level of approximation). In that case, running the algorithm on a larger space may lead to more rounding errors.
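A minimal numerical illustration of the rounding-error point (my own example, not from the comment): summing more floating-point terms accumulates more rounding error, just as running the same algorithm over a larger space gives it more opportunities to drift.

```python
def rounding_error(n):
    """Absolute error of summing n copies of 0.1 in floating point,
    compared to the exact value n/10. More operations, more drift."""
    return abs(sum(0.1 for _ in range(n)) - n / 10)

small_space = rounding_error(10)      # few operations, tiny error
large_space = rounding_error(10_000)  # many operations, larger error
```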
That acceptance is, in my experience, due to a lack of skills/intelligence. Realising that you don’t have enough skills/intelligence to withstand the (possible) consequences of speaking up makes it rational to comply with the rules and just hope that somebody else will bring about the change.
Those extended simulations are more complex than non-extended simulations. The simplicity assumption tells you that those extended simulations are less likely, and the distribution is dominated by non-extended simulations (assuming that they are considerably less complex).
To see this more clearly, take the point of view of the simulators, and for simplicity neglect all the simulations still running at t=now. So, consider all the simulations ever run by the simulators so far that have finished. A simulation is considered finished when it is not run anymore. If a simulation of cost C1 is “extended” to cost 2 C1, then de facto we call it a C2 simulation. So, there are well-defined distributions of finished simulations: C1, C2 (including pure C2 and extended C1 sims), C3 (including pure C3, extended C2, very extended C1, and all the combinations), etc.
You can also include simulations running at t=now in the distribution, even though you cannot be sure how to classify them until they finish. Anyway, for large t the number of simulations running now will be small with respect to the number of simulations ever run.
Nitpick: A simulation is never really finished, as it can be reactivated at any time.
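The bookkeeping above can be sketched as follows (an illustrative toy with C1 = 1 cost unit; the simulations listed are hypothetical, not data from the post): each finished simulation is classified by its total cost, so an extended C1 counts as a C2, and so on.

```python
from collections import Counter

C1 = 1  # base cost unit

# (base cost, cost added by extensions) for each finished simulation.
finished = [
    (C1, 0),                                  # pure C1
    (C1, C1), (2 * C1, 0),                    # extended C1 and pure C2, both class C2
    (C1, 2 * C1), (2 * C1, C1), (3 * C1, 0),  # three ways to end up in class C3
]

# Classify by total cost: extended simulations merge into higher classes.
classes = Counter(base + ext for base, ext in finished)
```

The resulting `classes` counter is exactly the well-defined distribution over finished simulations described above.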
The problem I see with Ethereum is the tech itself. Is building a scalable and decentralised blockchain possible at all? Ethereum needs to get it right in a few years, or it will lose the first-mover advantage and other chains will take the lead.
On the other hand, Bitcoin is already working as a decentralised store of value and doesn’t need crazy scalability, even though scalability would be beneficial (and necessary for it to become a daily currency).
-Polkadot has fewer than 300 validators at the moment; the system is not decentralised enough to withstand large attacks.
-Well, rising or at least stable. Considering that gold’s market cap is 10x Bitcoin’s, and that Bitcoin can be gold 2.0, there is definitely a large upside left. See also the stock-to-flow model applied to Bitcoin.
I will briefly give it a shot:
Operative definition of knowledge K about X in a localised region R of spacetime:
The number N of yes/no questions (bits of information) which a blank observer O can confidently answer about X by having access to R.
Notes:
-Blank observer = no prior exposure to X. There is an obvious extension to observers which already know something about X.
-Knowledge makes sense only with respect to some entity X, and for a given observer O.
-Accessing K in a given R may be very difficult, so an extension of this definition enforces a maximum effort E required to extract K. The maximum N obtained in this way is K.
-Equivalently, this can be defined in terms of probability distributions which are updated after every interaction of O with R.
-This definition requires having access to X, to verify that the content of R is sufficient to unambiguously answer the N questions. As such, it’s not useful for quantifying the accumulation of knowledge about things we don’t know entirely. But this is to be expected; I’m pretty sure one can map this to the halting problem.
Anyway, in the future it may be handy, for instance, to quantify whether a computer vision system (and which part of it) has knowledge of the objects it is classifying, say an apple.
-To make the definition more usable, one can limit the pool of questions and see which fraction of those can be answered by having access to R.
-The number N of questions should be pruned into classes of questions, to avoid infinities (e.g. does an apple weigh less than 10 kg? Less than 10.1 kg? Less than 10.2 kg? …).
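The pruned-pool version of the definition can be sketched like this (all names and questions below are my own hypothetical examples, not from the comment):

```python
def knowledge(answerable, question_pool):
    """Operative knowledge K: the number of yes/no questions from a
    fixed, pruned pool that observer O can confidently answer after
    accessing region R. Pruning the pool into classes of questions
    avoids the infinity of near-duplicate questions."""
    return sum(1 for q in question_pool if q in answerable)

pool = {"is it red?", "is it edible?", "weighs under 10 kg?", "grows on trees?"}
answerable_after_R = {"is it red?", "weighs under 10 kg?"}

K = knowledge(answerable_after_R, pool)  # 2 of the 4 questions answered
```

Limiting to a fixed pool also makes the "fraction of questions answerable" variant of the definition directly computable.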
Regarding your attempts at: https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/YdxG2D3bvG5YsuHpG
-Mutual information between region and environment: enforcing a max effort E implies that rocks have a small amount of knowledge, since it’s very hard to reverse-engineer them.
-Mutual information over digital abstraction layers: the camera cannot answer yes/no questions, so it has no knowledge. But a human with access to that camera certainly has more knowledge than one without.
-Precipitation of action: knowledge is defined with respect to an observer. So the map alone has no knowledge.
Apparently many records have been subject to cheating:
It’s a good point, but it’s like saying that to improve a city you can just bomb it and build it from scratch. In reality improvements need to be incremental and coexist with the legacy system for a while.
A Roadmap to a Post-Scarcity Economy
Awesome-github Post-Scarcity List
Awesome lists on GitHub are indeed curated open-source lists. If you know better resources, feel free to open a pull request so that I can incorporate them, thanks!
Nothing much to add to gbear605, there was no self-congratulatory intent here! I’m editing the title to make this a bit more clear.
The newly-created AGI will immediately kill everyone on the planet, and proceed to the destruction of the universe. Its sphere of destruction will expand at light speed, eventually encompassing everything reachable.
Why?
In fact, if not consensus, then at least the majority opinion amongst those mathematicians, computer scientists, and AI researchers who have given the subject more than a few days thought.
Is this true, or have you asked only inside an AI-pessimistic bubble?
And if true, why should opinions matter at all? Opinions cannot influence a reality which is outside human control.
Overall I don’t see a clear argument for why we should be worried about AGI. Quite the contrary: building AGI is still an active area of research with no clear solution.
Unpopular opinion (on this site, I guess): AI alignment is not a well-defined problem; there is no clear-cut resolution to it. It will be an incremental process, similar to cybersecurity research.
About the money, I would do the opposite: select researchers who would do it for free, just pay their living expenses, and give them arbitrary resources.
It is surely hard and tricky.
One of the assumptions of the original simulation hypothesis is that there are many simulations of our reality, and therefore we are with probability close to 1 in a simulation. I’m starting with the assumption that SH is true and extrapolating from that.
Boltzmann Brains are incoherent random fluctuations, so I tend to believe that they should not emerge in large numbers from an intentional process. But other kinds of solipsistic observers may indeed tend to dominate. In that case, though, the predictions of SH+SA still hold, since simulating the Milky Way for a solo observer is still much harder than simulating only the solar system for a solo observer.