Your points on deliberate misinformation are good ones. Whether it’s deliberate or not is muddied by polarized beliefs. If you work as an exec for a big company deploying dangerous AI, you’re motivated to believe it’s safe. If you can manage to keep believing that, you don’t even see it as misinformation when you launch an ad campaign to convince the public that it’s safe.
Your recent post "AGI deployment as an act of aggression" convinced me that it will indeed be a political hot potato, and helped inspire me to write this post. My current thinking is that it probably won't be viewed as an act of aggression sufficient to justify anything like military strikes, but it probably should be. One related thought is that we might not even know when we've deployed AGI with the power to do something that shifts the balance of power dramatically, like easily hacking most government and communication networks. And if someone does know their new AI has that potential, they'll launch it secretly.
I agree that technology research can be controlled. We’ve done it, to some degree, with genetic and viral research. I’m not sure if deployment can realistically be controlled once the research is done.