Counterargument: you can just defend against these AIs running amok.
As long as most AIs are systematically trying to further human goals you don’t obviously get doomed (though the situation is scary).
There could be offense-defense imbalances, but there are also ‘tyranny of the majority’ advantages.
That’s not the point though. Humans don’t want to defend, they want to press the big red button and will gain-of-function an AI to make the button bigger and redder.
Huh? Definitely some humans will try to defend...
Yes, sorry, some definitely will. But if you look at what is going on now, people are pushing in all kinds of dangerous directions with reckless abandon, even while knowing logically that it might be a bad idea.
I think “wants to defend” is actually pretty orthogonal to “wants to recklessly advance AI.”
hmm, orthogonal or just a different crowd/mindset?
Rather than figure out what each of those means exactly, I’ll say “I don’t expect the psychological forces pushing towards researching and releasing more capabilities faster to actually resist building the sort of tools that’d be useful for defending against AI.”