It appears that by default, unless some perfect, 100% bulletproof plan for aligning it is found, calling superintelligence a galaxy-destroying nuke is an understatement. So if there were some chance of a god forever watching over everything and preventing things from becoming too smart, I’d take it in a heartbeat.
Realistically, “watch over everything and prevent things from becoming too smart” is probably too difficult a goal to align, but perhaps a goal like “watch over everything and prevent programs with transformer-based architectures from running on silicon-based chips, while keeping all other interference to a minimum” could actually be specified without everyone getting atomized. Such a goal would buy humanity some time and also make it obvious to everyone just how close to the edge we are, and how high the stakes are.