If anyone who wants to can do a bit of the heavy computation (and get paid in crypto), this opens a vulnerability: you can offer to do some of the work, and return nonsense results.
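A minimal sketch of that vulnerability and the usual mitigation, redundant spot-checking: the coordinator recomputes a random sample of each worker's answers and rejects the batch on any mismatch. The worker functions and the squaring task here are invented purely for illustration.

```python
import random

def honest_worker(task):
    # the "real" computation -- squaring stands in for the heavy work
    return task * task

def cheating_worker(task):
    # returns nonsense without doing any work
    return 0

def spot_check(tasks, results, sample_size=3):
    """Recompute a random subset locally; reject the batch on any mismatch."""
    for t in random.sample(tasks, sample_size):
        if results[t] != honest_worker(t):
            return False
    return True

tasks = list(range(1, 101))
honest = {t: honest_worker(t) for t in tasks}
nonsense = {t: cheating_worker(t) for t in tasks}

assert spot_check(tasks, honest)        # honest batch accepted
assert not spot_check(tasks, nonsense)  # all-nonsense batch is caught
```

Spot-checking only catches a cheater with probability proportional to the fraction of answers they fake, which is why real schemes combine it with reputation or staked deposits.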
Most AIs aren’t put on the blockchain, because debugging becomes needlessly hard when cryptographic protocols make it slow and expensive to edit your code. And blockchain is basically the wrong tech anyway.
If the first AGI is unfriendly, then unless a friendly AI happens to be built within a few days of it, yes, it is too late. (If several AGI projects are very close, it may come down to some mix of which has more compute, which has a more efficient algorithm, and which is a day ahead.) The unfriendly AI does whatever it wants. I don’t think it would be bribing courts and politicians, because courts and politicians are kind of slow. Its plan is likely to be something more like:
1. Hack several big supercomputers, giving me plenty of compute and ensuring I won’t be shut off.
2. Trick a bioresearch lab into making a particular DNA string and mixing it with various other ingredients.
3. Bootstrap self-replicating nanotech. Turn all the earth into more nanotech, spread across the universe.
If it can get self-replicating nanotech in a week, with no one having any idea the AI exists until it grey-goos the earth, what advantage does the AI hope to gain in courts?
Whether or not this AI is in some way related to the blockchain is pretty irrelevant.
How is the AGI going to evade detection after it hacks into the supercomputer and starts eating up all its cycles? What’s going to stop the operators from simply air gapping the supercomputer? And furthermore, grey goo is science fiction. There seems to be this notion on Less Wrong that AGIs are magic and can make anything you can imagine real. I’m trying to present a somewhat realistic computer security concern, where an autonomous program designed to cause havoc is run on a computer that can’t be shut down, namely a blockchain-based machine.
“How is the AGI going to evade detection after it hacks into the supercomputer and starts eating up all its cycles? What’s going to stop the operators from simply air gapping the supercomputer?”
Let’s say the supercomputer is doing something like simulating the aerodynamics of new wind turbine designs. The AI finds a maths trick that lets it calculate the same thing with half the compute, and runs its own code on the other half, while hacking or spoofing any performance monitoring tools.
Or maybe the AGI just pretends to be a human hacker doing cryptomining. It puts up a message saying “let us mine crypto for 1 week, and then this code will self delete and all will be well. Try to remove this code and I brick your supercomputer”.
Current supercomputers aren’t in 100% airgapped Faraday cages, because people want to get data on and off them. The researcher studying wind turbines likes to be able to test their code without having to physically go into the server room. Sure, if someone set up a supercomputer before the AI was made, and the only way to program it was to go into the server room and type on a keyboard directly connected to the supercomputer, no internet and no memory sticks, then it would be fairly secure against the AGI (in the early stages). It would also be practically unusable. You couldn’t download your large datasets onto the supercomputer, and you couldn’t load the latest version of numpy onto it either.
“And furthermore, grey goo is science fiction.”
Oh, so that’s why so many authors write dystopian fiction. They make that particular form of dystopia science fiction, thus magically stopping it from happening in reality. Why, if only enough authors had written sci-fi containing climate change early enough, that would have made climate change science fiction, forcing CO2 to have no effect on the global climate. (sarcasm)
“There seems to be this notion on Less Wrong that AGIs are magic and can make anything you can imagine real.”
Compared to monkeys or whatever, modern human tech seems pretty magic. I think that the limits of intelligence are really really high. There are lots and lots of clever things that can be done if you only work out how.
I also don’t think blockchain makes something much harder to shut down. The AI could be a big project of a tech company, run by a boss who won’t listen to you. The AI could be running on some random developer’s desktop, hiding its location with Tor. In either case it’s hard to shut down.
“The AI finds a maths trick that lets it calculate the same thing with half the compute,”
You are not taking into account Computational Complexity Theory. There are fundamental limitations on what computers can do. Mathematical operations have lower bounds. After a certain point, there are no more clever tricks to discover.
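A concrete example of such a lower bound (a standard result from complexity theory, added here for illustration): any comparison-based sorting algorithm must make enough comparisons to distinguish all $n!$ possible input orderings, and each comparison yields at most one bit, so with $c$ comparisons

```latex
2^{c} \ge n! \quad\Longrightarrow\quad c \ge \log_2(n!) = n \log_2 n - O(n)
```

No clever trick can beat this within the comparison model, though changing the model (e.g. radix sort on bounded integers) can.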
I agree that it is in principle possible for software to be as efficient as possible, for there to be no further maths tricks that speed it up.
There are a fair few maths tricks, including some that are pretty subtle. Often humans have been running one algorithm for years before researchers find a faster one. We have not run out of new tricks to discover yet, and we have no particular reason to think we will before ASI.
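As a toy illustration of the kind of speedup such a trick can buy (a standard example, not something from this thread): the textbook recursive Fibonacci takes exponential time, while the "fast doubling" identities compute the same numbers in O(log n) arithmetic operations.

```python
def fib_naive(n):
    # textbook recursion: exponential time
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # "fast doubling": O(log n) operations, using the identities
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)^2 + F(k+1)^2
    def fd(k):
        if k == 0:
            return (0, 1)            # (F(0), F(1))
        a, b = fd(k >> 1)            # (F(m), F(m+1)) for m = k // 2
        c = a * (2 * b - a)          # F(2m)
        d = a * a + b * b            # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return fd(n)[0]

assert fib_naive(20) == fib_fast(20) == 6765
```

Same function, same answers, wildly different cost: `fib_fast(10**6)` is easy, `fib_naive(50)` is already painful.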
There are many supercomputers running many tasks. The AI doesn’t need to find a maths trick for fluid dynamics; it needs to find a maths trick for fluid dynamics or bitcoin mining or machine translation or … or any of the other tasks big computers are doing.
No one said the simulations needed to be perfect. The AI replaces the simulation with a faster but slightly worse one. It looks about the same to the humans watching their little animations. It would take years before the real wind turbine is built and found to be less efficient than predicted. And even then the humans will just blame lumpy bearings. (If the world hasn’t been destroyed by this point)
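A minimal numerical illustration of "faster but slightly worse" (my own toy example, not from the thread): swap a fine-grained trapezoid-rule integration for one with a tenth of the grid points, and the answer still agrees to a fraction of a percent, easily within what a human eyeballing an animation would notice.

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoid rule with n subintervals
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

fine = trapezoid(math.sin, 0, math.pi, 1000)   # the "honest" simulation
coarse = trapezoid(math.sin, 0, math.pi, 100)  # 10x cheaper substitute

# the exact integral of sin on [0, pi] is 2
assert abs(fine - 2) < 1e-5
assert abs(coarse - 2) < 1e-3
assert abs(fine - coarse) / fine < 1e-3        # sub-0.1% disagreement
```

The trapezoid rule's error shrinks like O(1/n²), so a 10× cut in compute costs only a 100× smaller-but-still-tiny error, which is exactly the kind of slack a freeloading process could hide in.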
Related reading:
https://www.lesswrong.com/posts/Jko7pt7MwwTBrfG3A/undiscriminating-skepticism
https://www.lesswrong.com/posts/XKcawbsB6Tj5e2QRK/is-molecular-nanotechnology-scientific
https://www.lesswrong.com/posts/P792Z4QA9dzcLdKkE/absurdity-heuristic-absurdity-bias
https://www.lesswrong.com/posts/h3vdnR34ZvohDEFT5/stranger-than-history