I think that sometimes, a problem is the optimal solution. A lot of rules are trade-offs between things, and it doesn’t help anyone to complain about the negatives of the trade-off which was made, as “fixing” it merely pushes the problem elsewhere. (Or alternatively, it requires a higher granularity of laws and regulations)
I could complain about criminals getting away with their crimes, but fixing that would require lowering the threshold of evidence, which would mean more innocent people being unfairly punished. And if that ‘solution’ were implemented, we’d hear complaints about that instead.
Better solutions exist, but they require context-dependent judgement. A lot of the laws and regulations we follow are too general. In most contexts, it’s probably good that the WHO can’t enforce quarantines on sovereign states, and this just happened to be an exception.
I don’t know what to call this problem, bias variance tradeoff maybe?
You’re far too willing to concede to their reformulation of the problem!
You can imagine a version of the world where we literally exhausted every other avenue (or every other avenue that isn’t even more tyrannical), and the only option left is either what actually happened or giving the WHO unchecked authority to enforce quarantines over sovereign states. But there are a number of actions that would be improvements without giving the WHO more authority in that way:
The WHO can advise sovereign states to enforce quarantines.
This is already what public health departments usually do, including within a country: they usually advise local governments rather than having operational or legislative control over the state’s use of force.
They can offer to broker multilateral agreements so that well-resourced states can have quarantine ships on site, and offer to take in everyone.
Instead of looking at the WHO, we can ask why individual states are not doing quarantines
The WHO or another third party can offer individuals money to self-quarantine
We can set up an international apparatus (can be an international body, nonprofit, or for-profit company) to manage these quarantines
etc, etc
In general, there’s massive slippage in the inference “this specific solution that you never proposed has sufficiently undesirable consequences, therefore the best realistic world is the status quo.”
This is something I’ve encountered often with this fallacy in particular. Otherwise rational, well-meaning, disinterested, etc., people often miss that Bob has completely reformulated the problem, from a complaint about a bigger issue into a narrower fault-analysis framing. For example, in an earlier draft of this post, I said:
Another example I’ve encountered recently is (anonymizing) when a friend complained about a prosaic safety problem at a major AI company that went unfixed for multiple months. Someone else with background information “usefully” chimed in with a long explanation of organizational restrictions and why the team responsible [...]
But what I (and my friend) cared about was the prosaic safety problem not being fixed! And what this says about the company’s ability to proactively respond to and fix future problems. Your internal team management was never a serious concern for us to begin with!
Claude’s review thought the AI safety example was kinda reasonable prioritization on the team’s part (“You can still rebut with ‘rational prioritization that produces this outcome is itself the problem,’ but the response isn’t a non sequitur the way the headline or hantavirus replies are.”). But it ignored/missed that I never brought up the specific team. Without privileged information, I don’t know the underlying dynamic here. It could be that the team should have more headcount or compute. It could be that a different team should handle it. It could be a different answer altogether. The complaint is about the company, not the team[1].
But it’s very easy/tempting for the fault-analysis framing, pitched at the wrong level, to suck up all the oxygen.
Now if the defense was that the company made the right choice (eg “in an ideal world we’d want to go carefully and address all safety issues like that. But due to competitive pressures, and also because this specific prosaic safety issue can’t realistically pose an x-risk, our company is being very reasonable in not devoting more resources to solving this problem over other safety issues and racing”), that’d be a much more valid defense. I still think it’s wrong, but at least it’d be valid!
Sorry for the slow response. In order to prevent evil, we form giant entities and grant them the power to regulate society. These entities then turn evil, and fail to regulate themselves. Since they’re huge and powerful, we cannot stop them, either. More of the same (more regulation, more people, more power to some agency) will necessarily fail to make things better.
I’m conceding because the status quo is terrible, and because I don’t think it can be made not-terrible. I give up on fixing it, and propose a re-design which prioritizes local (or at least local-first) solutions.
I do not think that this problem is in the domain of the WHO. In software engineering, each module does one job, and it has one responsibility. It’s considered bad design to have “god-classes” which are tied to too many things. We could keep working with spaghetti code, and trying to improve it, but I think it’s better to realize that the entire approach is bad in a way which is mathematically impossible to solve for good.
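As a loose sketch of the design principle being invoked here (the class and method names are hypothetical, just to illustrate the analogy): a “god-class” fuses many unrelated responsibilities into one entity, whereas the single-responsibility alternative keeps each module doing one job, composed at the edges.

```python
# Hypothetical illustration of the "god-class" anti-pattern vs.
# single-responsibility design, mirroring the comment's analogy.

class GlobalAuthority:
    """A god-class: advisory, coercive, and self-auditing roles all fused
    together, so no one part can be changed or checked independently."""
    def advise(self, state):
        return f"recommend quarantine to {state}"
    def enforce_quarantine(self, state):
        return f"force quarantine on {state}"
    def audit_itself(self):
        return "all clear (says us)"

# Single-responsibility alternative: each small module does one job.
class Advisor:
    """Advises sovereign states; holds no enforcement power."""
    def advise(self, state):
        return f"recommend quarantine to {state}"

class QuarantineBroker:
    """Brokers multilateral agreements between willing states."""
    def broker(self, states):
        return {s: "quarantine ships on site" for s in states}
```

The point of the split is that each piece can fail, be audited, or be replaced locally without dragging the whole structure along with it.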
Individuals should keep themselves and others safe. Failing that, the local population should help each other out. Failing that, the local authority should act; failing that, the state itself should order a quarantine. If the WHO needs to step in, and use force at that, several things have already gone entirely wrong: if individuals don’t keep themselves safe, that’s a failure of education, or too much regulation preventing local solutions (similar to how it’s illegal to feed the homeless in many places). If the local population fails to help one another, that’s a lack of community, and perhaps also a result of too many rules and a tendency to rely on authorities for problems one could solve oneself. Etc.
In your next example, I think the problem might be cross-module failures. If a problem is a responsibility of more than one person, it often goes unfixed. Perhaps a single person does not have the authority to fix it (that is, they’d have to step outside of their usual scope of work and take responsibility for the consequences), or a single person cannot fix it without the help of others, leading to coordination issues.
This problem is less common when people have an investment in the thing at hand, and when they dare to actually act. But the modern world punishes agency: you’re rarely rewarded for competence, and you’re easily punished for not delegating the task to an authority (e.g. in Switzerland and a number of other countries, if you’re renting, you’re not allowed to install a dishwasher yourself; you need to call a “professional”). These asymmetrical incentives force people to be passive, incompetent, reliant on authorities, and ignorant of the world around them. The solution is to allow people local freedom to fix local problems, and to reward them when they succeed.
I’d like to see a world where anyone can do anything they want, as long as they’re the most competent person available to fix that problem at that time, and in which they’re given freedom according to their level of competence. The company is like a program coded by somebody incompetent, nested inside a larger structure (society) which also has poorly thought-out incentives (that is, it punishes good design principles).
I think it’s neither supported that this specific problem can’t be solved without a radical restructuring of society, nor particularly relevant to either my meta-level or object-level points. Bowing out for now.