I agree with the top part. I think it’s naive to believe that AI is helping anyone, but what I want to talk about is why this problem might be unsolvable (except by avoiding it entirely).
If you hate something and attempt to combat it, you will get closer to it rather than further away, in the sense people mean when they say “You actually love what you say you hate”. When I say “don’t think about pink elephants”, the more you try, the more you will fail, because the brain doesn’t have subtraction and division, only addition and multiplication.
You cannot learn about how to defend yourself against a problem without learning how to also cause the problem. When you learn self-defense you will also learn attacks. You cannot learn how to argue effectively with people who hold stupid worldviews without first understanding them and thus creating a model of the worldview within yourself as well.
Due to mechanics like these, it may be impossible to research “AI safety” in isolation. It’s probably better to use a neutral term like “AI capabilities”, which includes both the capacity for harm and the defense against harm, so that we don’t mislead ourselves with words. The misleading framing can cause untold damage, much like viewing “good and evil” as opposites, rather than as two sides of the same thing, has.
I also want to warn everyone that there seems to be an asymmetry in warfare which makes attacking strictly easier than defending, and this asymmetry seems to grow as technology improves.