seems like a very reasonable concern to me. how do you build an anti-authority, voluntarist information-sharing pattern? it does seem to me that a key part of ai safety is going to be the ability to decide to retain strategic ambiguity. if anything, strongly safe ai should make it impossible for large monitoring networks to work, by construction!
Right? A lack of resilience is already a problem we face today. It seems silly to aim for something that could plausibly cascade into the very problems people fear, in an attempt to avoid those problems in the first place.