Both seem well addressed by not building the thing “until you have a good plan for developing an actually aligned superintelligence”.
Of course, somebody else still will, but your adding to the number of potentially catastrophic programs doesn’t seem to improve that.
I mean, yes, but I’m addressing a confusion that’s already (mostly) conditioning on building it.