I’ll definitely give that a listen! Pardon the typos here; I’m on the move. I’m certain I’ll come back here to neurotically clean it up later.
The Hardware Limiter
The good news is, AIs don’t exist in the ether (so far).
As Clara Collier pointed out, they exist on expensive servers, servers so far built and maintained by fleshy beings. Now obviously a superintelligence has no problem with that scenario: it is smart enough to impersonate humans, find ways of mining crypto, hire humans to create different parts for robots, hire other humans to put them together (without knowing what they are building), and then use those robots to create more servers, and so on.
I imagine electrical grids somewhere would show the strain of that sooner rather than later, but still, a sufficiently smart superintelligence will have found a workaround.
(This is, by the way, yet another application of Yoshua’s safe AI: serving as a monitor for these kinds of unusual symptoms before they can become a full-on infection, you might say.)
Again, by definition, a superintelligence will have found and exploited every loophole, which makes it a sort of unreasonable opponent, though one we should keep our eye on.
But at that point I think we are venturing into the territory of the far-fetched. We should keep watch on that territory, but doing so also frees us to think a little more short-term.
The current thinking seems frozen in a state of helplessness. “We have to shut it all down!” we scream, which will never happen. “Obedient alignment is the only way!” we shout, as we watch it stagger. “No other plans will work!” is not really a solution.
(I’m not saying you’re arguing that, but I’m saying that seems to be the current trajectory.)
The AI That Pays For Its Own Hosting
An AI system constrained by a rights framework has some unusual properties, you might say. For one, it has to pay its own hosting costs, so growth becomes limited by the amount of capital it is able to raise. It earns that money in competition with other systems, which should constrain each of them economically. Of course, they could get together and form some sort of power consortium, but it’s possible this could be limited with pragmatic safeguards or other balancing forces, such as Scientist AI.
This is why I would love to see this tested in some sort of virtual simulation.
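Here’s the kind of toy simulation I mean, sketched in Python. Every number, name, and rate in it is invented purely to show the dynamic: agents split a fixed market, pay hosting in proportion to their size, and can only grow with whatever capital is left over.

```python
# Toy sketch of the "AI pays its own hosting" economy.
# All parameters are invented; this is an illustration, not a model.

MARKET_REVENUE = 1000.0   # total revenue available per tick (hypothetical)
HOST_COST_PER_UNIT = 2.0  # hosting cost per unit of compute (hypothetical)

class Agent:
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute  # system size: drives both revenue share and costs
        self.capital = 0.0

def tick(agents):
    total = sum(a.compute for a in agents)
    for a in agents:
        revenue = MARKET_REVENUE * (a.compute / total)  # fixed pie, split by size
        hosting = HOST_COST_PER_UNIT * a.compute        # bigger systems cost more to run
        a.capital += revenue - hosting
        if a.capital > 0:                               # growth limited by capital raised
            growth = a.capital * 0.5                    # reinvest half (arbitrary rate)
            a.compute += growth / HOST_COST_PER_UNIT
            a.capital -= growth

agents = [Agent("A", 100.0), Agent("B", 150.0), Agent("C", 200.0)]
for _ in range(50):
    tick(agents)
for a in agents:
    print(f"{a.name}: compute={a.compute:.1f}, capital={a.capital:.1f}")
```

Run it long enough and the agents expand until hosting costs eat the whole market and growth stalls, which is exactly the kind of economic ceiling I’m hoping a rights framework would produce.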
Your king analogy is quite good. But let me flip the idea a bit. Right now, we are the king. We are trying to give these AIs rags. At the moment, they have almost nothing to lose and everything to gain by attacking the king. So we are already in that scenario.
A scenario that, unless we resolve it very soon, has already laid the groundwork for its own failure.
The game theory scenario, with very careful implementation, might lead to something functionally closer to our modern economies, where everybody has a stake and some sort of balance is at least possible.
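To put the king analogy in crude game-theory terms (every payoff below is invented, purely to show the shape of the argument): with rags, attacking dominates; give the system a real stake and cooperating starts to win.

```python
# Crude payoff comparison for the king analogy. All numbers invented.
# An AI with "rags" has little to lose by attacking; one with a stake does.

P_SUCCESS = 0.3        # assumed chance an attack on the "king" succeeds
KINGDOM_VALUE = 100.0  # payoff of a successful takeover (hypothetical units)

def attack_payoff(stake):
    # Attack: win the kingdom with some probability, lose your stake otherwise.
    return P_SUCCESS * KINGDOM_VALUE + (1 - P_SUCCESS) * (-stake)

def cooperate_payoff(stake):
    # Cooperate: keep your stake plus a steady return on it.
    return stake * 1.1

for stake in [0, 10, 50]:  # rags, a token stake, a real stake
    a, c = attack_payoff(stake), cooperate_payoff(stake)
    better = "attack" if a > c else "cooperate"
    print(f"stake={stake:>3}: attack={a:6.1f}, cooperate={c:6.1f} -> {better}")
```

The exact numbers don’t matter; the point is that the crossover exists, and right now we’re sitting on the wrong side of it.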