Is working on better hardware computation dangerous?
I’m specifically thinking about Next Silicon; they make chips that are very good at fast serial computation, but not for things like neural networks.
Thanks!
Better hardware reduces how efficient software needs to be in order to be dangerous. So I suspect on balance that yes, this makes developing such hardware more dangerous, and that not working on it can buy us some time. But the human brain runs on about 20 watts of sugar and isn’t anywhere near optimized for general intelligence, so we shouldn’t strictly need better hardware to make AGI, and I don’t know how much time it actually buys.

Also, better hardware makes more kinds of solutions feasible. If aligned AGI requires more computational capacity than unaligned AI, or if better hardware means ems (whole-brain emulations) happen first and can help offset AGI risk until we solve alignment, then it seems possible to me that the arrow points in the other direction.