Please note that the graph of per capita war deaths is on a log scale. The number moves over several orders of magnitude. One could certainly make the case that local spikes were sometimes caused by significant shifts in the offense-defense balance (like tanks and planes making offense easier for a while at the beginning of WWII). These shifts are pushed back to equilibrium over time, but personally I would be pretty unhappy about, say, deaths from pandemics spiking 4 orders of magnitude before returning to equilibrium.
I’d also guess the equilibrating force is a human tendency to give up after a certain percentage of deaths, one that holds regardless of how those deaths are inflicted. But that force only operates where there’s a political motivation for war to abandon; it does nothing for deaths from other causes.
Yeah, I think it’s the amplitude of the swings we need to be concerned with. Whether or not there’s a mean-reversion tendency isn’t load-bearing; a big enough swing still wipes us out.
Supposing we achieve AGI, someone has to get there first. At the very least, a human-level AI could be copied to another computer, and then you have two human-level AIs. If inference remains cheaper than training (which seems likely, given how current LLMs work), then it could probably be copied immediately to thousands of computers, and you have a whole company of them. If they can figure out how to use existing compute to run themselves more efficiently, they’ll probably stop operating on anything like human timescales shortly thereafter, and we get a FOOM. No one else has time to catch up.
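For intuition on the "thousands of copies" claim, here's a rough back-of-envelope sketch. Every constant in it (parameter count, training tokens, training time, "human speed" token rate) is an illustrative assumption, not a fact about any real system; the only standard pieces are the ~6N FLOPs/token training and ~2N FLOPs/token inference approximations.

```python
# Back-of-envelope sketch: if the cluster that trained a model were switched
# over to inference, roughly how many "human-speed" copies could it run at once?
# All constants below are illustrative assumptions.

TRAIN_TOKENS = 1e13          # assumed training set size (~10T tokens)
PARAMS = 1e12                # assumed parameter count (~1T)
TRAIN_DAYS = 90              # assumed wall-clock training time on the cluster

FLOPS_PER_TRAIN_TOKEN = 6 * PARAMS   # standard ~6N FLOPs/token training estimate
FLOPS_PER_INFER_TOKEN = 2 * PARAMS   # standard ~2N FLOPs/token inference estimate

# Implied sustained throughput of the training cluster
cluster_flops_per_sec = TRAIN_TOKENS * FLOPS_PER_TRAIN_TOKEN / (TRAIN_DAYS * 86400)

HUMAN_SPEED_TOKENS_PER_SEC = 10      # assumed "thinks at roughly human speed" rate

# How many copies, each generating at human speed, the same hardware could serve
copies = cluster_flops_per_sec / (FLOPS_PER_INFER_TOKEN * HUMAN_SPEED_TOKENS_PER_SEC)

print(f"Cluster throughput: {cluster_flops_per_sec:.2e} FLOP/s")
print(f"Human-speed copies runnable in parallel: {copies:,.0f}")
```

With these made-up numbers the answer comes out in the hundreds of thousands of copies, so even if the assumptions are off by a couple of orders of magnitude, "a whole company of them" looks easy to reach on the hardware that already exists at that point.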