Good post! While I don't like ascribing things to conspiracies when they can be explained by simple psychology, with climate change at least we know there have also been deliberate efforts by fossil fuel companies to muddy the waters or hide important information. Evidence of this has surfaced, we know of think tanks that were blatantly funded for the purpose, and so on. Propaganda is unfortunately also a thing. I don't know whether to expect it on the same scale for AI; it's a less entrenched industry, though the main operators (Google, Microsoft, Meta) are all very big, rich, and powerful. They push AI, but it's not their only source of revenue and relevance. We'll see, but it's something to be wary of.
I think a big divide is also a psychological one, something we saw with COVID too. Some people are simply more safety-minded, weighing future risks at a lower discount rate. I wear a mask to this day in enclosed spaces (especially crowded, poorly ventilated ones) because I've heard reports of what Long COVID is like, and even at low odds I don't want any of that. I'm also fairly willing to suffer small discomforts, since I tend to just tune them out, and I don't feel much peer pressure. A different mix of these traits will produce different results. To me, 30 years doesn't seem nearly a long enough timeline to justify not worrying about something like X-risk; we've spent more energy figuring out how to bury nuclear waste safely for thousands of years.
Maybe one thing that's easier to convince experts of is that even if aligned, AGI would be a very hot potato politically and economically, and generally hard to control. But many people seem to just go with "well, we need to make the best of it because it's unavoidable." Perhaps one of the biggest problems we face is this sort of tech fatalism, the notion that we can't even try to steer our own development. I don't think that's entirely true: plenty of technological developments turned on small moments and decisions, if not in whether they happened at all, then in how they turned out. (As for happening at all, the most glaring example may be Archimedes of Syracuse, who came close to discovering calculus before being killed in the siege of his city; calculus then had to wait some 1,800 years.) We've just never really steered on purpose. A common heuristic also goes "trying to slow down technology sounds oppressive/reactionary," and sure, it kind of does, but everyone who pursues AGI also admits it would be a technology like almost nothing else before.
Your points on deliberate misinformation are good ones. Whether it’s deliberate or not is muddied by polarized beliefs. If you work as an exec for a big company deploying dangerous AI, you’re motivated to believe it’s safe. If you can manage to keep believing that, you don’t even see it as misinformation when you launch an ad campaign to convince the public that it’s safe.
Your recent post "AGI deployment as an act of aggression" convinced me that it will indeed be a political hot potato, and helped inspire me to write this post. My current thinking is that it probably won't be viewed as an act of aggression sufficient to justify anything like military strikes, but it probably should be. One related thought: we might not even know when we've deployed an AGI with the power to shift the balance of power dramatically, for example by easily hacking most government and communication networks. And if someone does know their new AI has that potential, they'll launch it secretly.
I agree that technology research can be controlled. We've done it, to some degree, with genetic and viral research. I'm not sure whether deployment can realistically be controlled once the research is done.