it’s the same reason in both cases: there are enormous groups you’re attacking by either approach. generally speaking, groups don’t like being attacked, and will attack you back, which usually undoes the benefit of attacking.
I agree I will be attacking many groups by continuing on my current plan
“which usually undoes the benefit of attacking”—reason?
in particular, starting a war over AI seems likely to simply lead to AI being pushed into use for military use much more rapidly.
I have already assumed that unless the state of affairs changes significantly, by 2026 or 2027, AI capabilities will be the number one focus of the entire US defence establishment. You can read the first bullet point in this red-teaming document.
My actions don’t accelerate it much because it seems to be happening regardless of what I do.
sudden loss of privacy would likely result in mass death of some sort, once some kind of war gets started in response
I agree that multiple governments may fall and war may be triggered.
it’s effectively impossible to have even a vague sense of whether it’d end up stabilizing on a good world after the easily predictable hell it would create at the beginning.
I agree that having very confident predictions is hard. I will at some point publish more on why I think it’ll be net good. I do think it’ll be net good, with a decent level of confidence.
there are certainly many bad things people do that it would make public, but if everyone knows the terrible stuff at once, is that even negative for people who do bad things?
Umm... yes? I generally think people as a whole will coordinate to prevent many of the bad things from happening. One major problem today is that whenever people coordinate to get something done, the leader and the org end up abusing that trust. Earning trust will become easier in such a world.
Worst case, if they fail to prevent bad things for now, at least there is much more clarity about the state of affairs, so that someone else can fix it later.
and what about how malicious people would use it to attack pacifist targets?
You need to consider the offence-defence balance, in terms of who holds power in such a world. I generally think most people can coordinate to prevent a few people from attacking a few other targets.