Here’s why I disagree with the core claims of this post:
Its main thesis rests on somewhat circular reasoning: “Complex systems are hard to control” is weakly circular, because complexity is partly defined as the difficulty of understanding how a system works, which in turn affects our ability to control it.
Its examples of control challenges mostly involve single attempts or one-pass resolutions, rather than the long-term view in which many attempts to control the system are made in sequence. Precisely because these systems have feedback loops, repeated attempts are likely to produce a signal that can be used to guide subsequent attempts, as the toy sketch below illustrates.
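A minimal sketch of this point (my own illustration, not from the post): even when a system’s internal response is unknown and noisy, the error observed after each attempt is a signal that steers the next attempt toward the target. The specific gains and noise level here are arbitrary assumptions.

```python
# Toy illustration: sequential control attempts against a system whose
# true response (UNKNOWN_GAIN) the controller never observes directly.
import random

random.seed(0)

TARGET = 10.0          # desired system output
UNKNOWN_GAIN = 2.3     # hidden system response; invisible to the controller
GAIN_ESTIMATE = 1.0    # deliberately wrong initial model of the system
LEARNING_RATE = 0.5    # how strongly each observed error adjusts the input

control_input = 0.0
for attempt in range(20):
    # The system responds through dynamics we do not know, plus noise.
    output = UNKNOWN_GAIN * control_input + random.gauss(0, 0.2)
    error = TARGET - output
    # Each attempt's error is the "signal" guiding the next attempt.
    control_input += LEARNING_RATE * error / GAIN_ESTIMATE
    print(f"attempt {attempt:2d}: output={output:6.2f}  error={error:6.2f}")
```

Despite the badly mis-specified model, the error shrinks within a few attempts; the feedback loop is what makes iterative control possible, not what rules it out.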
The arguments are fairly hand-wavy. For example, “due to interactions and feedback loops” is a catch-all explanation commonly cited for bad things of every kind.
It argues that key engineering techniques, such as “modularity,” are inadequate. The post briefly notes that the US government is modular and that this wasn’t enough to stop a few bad things from happening, but it never asks whether more bad things would have happened without that modularity.
GPT-4 apparently wrote several of the arguments in this post. Even if the arguments it came up with are weak, this is evidence that systems such as GPT-4 are relatively easy to control. It is also evidence against the hypothesis that systems trained on larger data sets become harder to control as they scale up.
In general, the belief that accidents cause more deaths than intentional killing is vastly overstated. This post leans heavily on the idea that accidents are far more dangerous, and far more important to worry about, than intentional harm. Yet the list of the worst accidents in human history reports several orders of magnitude fewer deaths than episodes of people killing each other on purpose. This suggests that far less harm comes from being unable to control a complex system than from outright conflict.