If running a single copy of a given AI system (let’s call it SketchyBot) for 1 month has a 5% chance of destroying the world …
Even given entirely aleatoric risk, it’s not clear to me that the compounding effect is necessary.
Suppose my model for AI risk is a very naive one—when the AI is first turned on, its values are either completely aligned (95% chance) or unaligned (5% chance). Under this model, one month after turning on the AI, I'll have a 5% chance of being dead and a 95% chance of being an immortal demigod. Another month, year, or decade later, those odds haven't moved: still a 5% chance I'm dead and a 95% chance I'm an immortal demigod. Running other copies of the same AI in parallel doesn't change that either.
More generally, it seems that any model of AI risk where self.goingToDestroyTheWorld() is evaluated exactly once isn't subject to those sorts of multiplicative risks. In other words, 1 - .95**60 == we're all dead only works under fairly specific conditions, no epistemic arguments required.
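The gap between the two models is easy to make concrete with a quick back-of-the-envelope calculation (using the 5% figure from the example above and an arbitrary 60-month horizon):

```python
# Contrast the two risk models: doom decided once at startup vs.
# an independent chance of doom every month of runtime.
p_unaligned = 0.05
months = 60

# Evaluated-once model: the coin is flipped at startup, so the
# risk never compounds with runtime.
risk_once = p_unaligned

# Compounding model: an independent 5% chance of doom each month.
risk_monthly = 1 - (1 - p_unaligned) ** months

print(f"evaluated once: {risk_once:.3f}")    # 0.050
print(f"compounding:    {risk_monthly:.3f}")  # 0.954
```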
In fact, the epistemic uncertainty can actually increase the total risk if my baseline is the evaluated-once model. Adding other possible worlds where the AI decides each morning whether it wants to destroy the world, or is fundamentally incompatible with humans no matter what we try, just moves that integral over all possible models towards doom.
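As a toy illustration, a discrete version of that integral might look like the following. The credences are entirely made up; the point is only that mixing in any worse models pulls the total above the evaluated-once baseline:

```python
# Hypothetical credences over the three risk models mentioned above.
# All numbers are illustrative, not estimates of anything.
p = 0.05

doom_given_model = {
    "evaluated once": p,
    # re-decided each morning, over a decade of mornings
    "decides each morning": 1 - (1 - p) ** 3650,
    "fundamentally incompatible": 1.0,
}
credence = {
    "evaluated once": 0.8,
    "decides each morning": 0.1,
    "fundamentally incompatible": 0.1,
}

total = sum(credence[m] * doom_given_model[m] for m in doom_given_model)
print(f"{total:.3f}")  # 0.240 -- well above the 0.050 baseline
```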
I apologize as this is a theory I’m still working out myself.
No worries! Hashing out the details in our theories is always fun, and getting another perspective should be encouraged.
With that said, I think this theory could still use some more work.
The torrent of information actually transfers FASTER the more seeds / leeches there are.
That’s because there are more computers in use, yes. Adding more physical computers often increases speeds, but that’s not an ironclad rule. Changing how the host and client interact without adding more computers is unlikely to be incredibly helpful unless you’re fixing a mistake with the initial setup, and splitting one program on one supercomputer into multiple programs on the same supercomputer is almost certainly less efficient.
HOST AI contains the indexes for simulation PEOPLE. … The heavy lifting would be dispersed among the AI in the simulation PEOPLE.
Well, it seems clear that humans are part of the simulation. Our brains are made of normal matter, and cutting bits of them off materially affects how we think. Less morbidly, antidepressants (and a whole laundry list of other psychoactive drugs) can affect our worldview, moods, and thoughts. Those drugs are also made of normal matter, at least as far as we can tell, so there doesn’t seem to be a good way to keep a clear-cut distinction between the simulation people and the simulation universe.
Unfathomable amounts of them, which would be updated, deleted …etc. as deemed necessary by the HOST AI.
Does this line up with what we see in the real world? Do people exist in unfathomable numbers? Change instantly? Vanish without warning?
The heavy lifting would be dispersed among the AI in the simulation PEOPLE.
Using simulated systems to compute anything is almost always less efficient than just running the computations on the real computer. Compare the power of an old video game console to the power of the modern PC needed to emulate it—to correctly simulate even an old SNES, you need a very powerful computer. Using that simulated SNES to run anything, as opposed to just running it on your real-life computer, would be insane, unless it's an old game that can only run on the SNES.
In short, running Breath of the Wild accurately requires fewer computational resources than accurately simulating an old SNES and playing the original Mario Bros on that simulated system. And that simulated system was designed for one reason alone—computation. Humans … kinda aren’t.
The Ai in the PEOPLE simulation would be unaware of the Environment simulation
Except that we are very clearly aware of our environment. I can see the house that I live in, and measure the temperature outside, among hundreds of other mundane universe-me interactions. More generally, it doesn’t make much sense to me to simulate an entire universe, simulate a bunch of human minds, and somehow not put them together.
A person sleeping may actually be in an idle state sharing its computing power among others.
Assuming that a sleeping person takes significantly fewer resources to simulate than a conscious one (doubtful), any reasonable computer would dynamically balance resources. The method you're suggesting (the "person" program tells the host it can give up some resources) is called cooperative multitasking, and it dates all the way back to at least the Apollo guidance computer in the 1960s, if not even earlier. Note that we've largely moved to other forms of computer resource sharing because the cooperative approach has serious downsides.
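For what it's worth, the cooperative approach can be sketched in a few lines (a toy illustration, not how any real scheduler is written). The main downside shows up immediately: a task that never yields starves everyone else, which is a big part of why modern operating systems preempt instead:

```python
# Toy cooperative scheduler: each "person" task runs until it
# *voluntarily* hands control back via yield.
def person(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # cooperatively give control back to the scheduler

def run(tasks):
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)          # let the task run until its next yield
            tasks.append(task)  # re-queue it at the back of the line
        except StopIteration:
            pass                # task finished; drop it

log = []
run([person("alice", 2, log), person("bob", 2, log)])
print(log)  # ['alice:0', 'bob:0', 'alice:1', 'bob:1']
```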
Since both AI (in this example) will strive to learn and expand its computing power.
I think you need a more rigorous definition of computing power. In a traditional sense, there are metrics based on number of transistors, floating point operations per second, and so on, but machine learning doesn’t affect that. Machine learning is usually a property of the software, not the hardware, and so does not affect the power of that hardware.
If you want a metric for “power” of a software agent, you’ll need to be very careful about how you define it.
Oh, and sorry for the wall of text :)
Several comments, possibly motivated by my not entirely understanding your idea here.
It doesn't seem obvious to me that it's possible to reduce the computational difficulty of a simulation by "offloading" that difficulty onto another part of the simulation. You're also a little unclear about what you mean by "computing power" to begin with.
Every AI within the simulation would get fragmented data
OK, what do you mean by this? Do you mean that each agent gets inputs from only some of the space, i.e. fog of war? Under that interpretation, it’s trivially true—I do not know everything.
Do you mean that each agent is itself computing some fraction of the simulation? Please note that the agent is part of that simulation, and you run into weird recursion problems. Yes, we have mental models of the universe, but the map is not the territory. Those models diverge from reality in many significant ways.
The AI within the simulation would be unaware of this situation. As would the AI controlling the simulation.
This is the first you’ve mentioned of an adversarial (?) AI controlling the simulation as a whole. What would this imply, and how is it related to the paragraph before it?
Edit: Finally, how could “quantum neural net and you” be the right title for a post that is entirely unrelated to anything quantum?