One possibility would be for the malign intelligence to take over the world by orchestrating a nuclear war, while being sufficiently hardened/advanced that it could survive and develop more quickly in the aftermath.
I personally don’t think writing down a goal gives us any predictability without a lot of work, which may or may not be possible. Specifying a goal assumes that the AI’s perceptual/classification system chops up the world the same way we would (and we have no formal specification of how we do that, nor does it stay fixed over time). We would also need to solve the ontology identification problem.
I’m of the opinion that intelligence might need to be self-programming at a micro, subconscious level, which might make self-improvement hard at a macro level. So I think we should plan for non-fooming scenarios.