This is interesting, because once you have AI you can use it to make a simulation like this feasible, by making the code more efficient, monitoring the AI’s thoughts, etc. Yet the “god AI” wouldn’t be able to influence the outside world in any meaningful way, and its modification of the inside world would be heavily restricted to just alerting admins about problems, making the simulation more efficient, and finding glitches.
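The restriction described above amounts to a whitelist of permitted actions. A minimal sketch of that idea, with all names purely illustrative (this is not a real API, just the shape of the gate):

```python
# Hypothetical sketch: an overseer AI whose effects on the simulation are
# gated through a fixed whitelist -- alerting admins, optimizing the code,
# and reporting glitches. Anything else is denied and logged.

ALLOWED_ACTIONS = {"alert_admins", "optimize_code", "report_glitch"}

class RestrictedOverseer:
    def __init__(self):
        self.log = []

    def request(self, action, payload):
        """Every action the overseer proposes must pass through this gate."""
        if action not in ALLOWED_ACTIONS:
            self.log.append(("denied", action))
            return False
        self.log.append(("allowed", action, payload))
        return True

overseer = RestrictedOverseer()
overseer.request("alert_admins", "anomaly in region 7")  # permitted
overseer.request("modify_outside_world", "anything")     # denied and logged
```

The point of the design is that the gate, not the overseer, defines what is possible; the overseer can propose anything, but only whitelisted actions take effect.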
All you have to do is feed the original AI some basic parameters (humans look like this, cars have these properties, etc.) and it can generate its own laws of physics and look for inconsistencies. That way the AIs inside would have a hard time figuring it out and abusing bugs.
I don’t think it’s necessary to make the AIs human, though. You could run a variety of different simulations. In some, the AIs would be led into a scenario where they would have to do something or other (maybe make CEV) that would be useful in the real world, but you would want to test it for hidden motives and traps in the simulation first before implementing it.
Despite a number of assumptions here that would have to be true first (like the development of AI in the first place), a real concern would be how you manage such an experiment without the whole world knowing about it, or, with the whole world knowing about it, how you make it safe so that terrorists can’t blow it up, hackers can’t tamper with it, and spies can’t steal it. The world’s reaction to AI is my biggest concern in any AI development scenario.
Despite a number of assumptions here that would have to be true first (like the development of AI in the first place)
A number of assumptions, yes, but actually I see this as a viable route to creating AI, not something you do after you already have AI. Perhaps the biggest problem in AI right now is the grounding problem—actually, truly learning what nouns and verbs mean. I think the most straightforward viable approach is simulation in virtual reality.
real concern would be how you manage such an experiment without the whole world knowing about it, or with the whole world knowing about it but make it safe so some terrorists can’t blow it up, hackers tamper with it, or spies steal it. The world’s reaction to AI is my biggest concern in any AI development scenario.
I concur with your concern. However, I don’t know if such an experiment necessarily must be kept a secret (although that certainly is an option, and if/when governments take this seriously, it may be so).
On the other hand, at the moment most of the world seems to be blissfully unconcerned with AI.