This is a good write up of an interesting, if pessimistic argument. I’m not sold that this happens on a timescale that falls within ordinary human planning timelines of a century or two, but I’m not totally convinced that it doesn’t, either.
I’ve actually seen a somewhat different argument about the dangers of optimization. This was made by Vernor Vinge in his fantastic novel, A Deepness in the Sky.
The central idea was that optimization gives you more resources to use, but that sufficient optimization also destroys “slack”: your margin for dealing with emergencies. For example, a highly optimized “just in time” manufacturing system is more profitable than idle warehouses full of inventory. But if things go wrong, you have very little buffer to draw upon. And over generational time, if nothing else kills you, there is a temptation to optimize right up to the limits of your environment, which means that even small environmental shifts might cause cascading failures.
I’m not sure if this poses a true risk of civilizational collapse and large-scale disaster. But it has made me appreciate the value of slack and redundancy in systems. I recall that Netflix, for example, used to run across 3 AWS regions while only needing 2 of them, which meant they could lose a region and keep operating.
And this is certainly a risk that ops people know: Running too close to 100% capacity for too long means that failures tend to cascade rapidly and dramatically.
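One rough way to see why operating near 100% capacity is so dangerous is basic queueing theory: in a simple M/M/1 queue, the mean time a job spends in the system is 1 / (service rate − arrival rate), which blows up as utilization approaches 1. This is my own illustrative sketch, not something from the original argument, but it captures the intuition that the last few percent of “optimization” eats all your margin:

```python
def mean_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Mean time a job spends in an M/M/1 queue (waiting + service).

    W = 1 / (mu - lambda); diverges as utilization lambda/mu -> 1.
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at or above 100% utilization")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical numbers: a system that can serve 100 jobs/sec.
service_rate = 100.0
for utilization in (0.50, 0.90, 0.99):
    w = mean_time_in_system(utilization * service_rate, service_rate)
    print(f"utilization {utilization:.0%}: mean time in system {w * 1000:.0f} ms")
```

Going from 50% to 99% utilization here multiplies latency fifty-fold, with no room left to absorb even a small traffic spike before the queue becomes unstable.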