I can’t think of any particular reason why urban planning needs to suck—the problems with it seem to be based on historical happenstance, not structural necessity.
That’s interesting though—are there any fields that suck purely out of necessity? What’s the mechanism that causes it?
I’ve worked in the field of urban construction for 45 years or so, and I think qwern’s point is well taken. Urban planning meetings are complex affairs involving many competing interests, and expecting a group of humans with differing agendas to always make rational decisions is not realistic in the near term.
Until something is worked out to improve human thinking and decision making, we’ll have to keep muddling along. Having worked in the field, I am amazed we do as well as we do.
That’s interesting though—are there any fields that suck purely out of necessity? What’s the mechanism that causes it?
I think the perspective to take is one of ‘market failures’. Either there are large improvements to be had in urban planning, or there are not. If there are not, then the whole discussion is pointless, so let’s assume there are. If there are large improvements to be had, then by Coase’s theorem, in a functioning market they will get made. They are not getting made. Therefore, urban planning is not a functioning ‘efficient market’ in any sense. Why? Rent-seeking (literally), conflicting interests, politics, the expense of construction, legacy costs of infrastructure (see any major construction in NYC or Boston compared to, say, Shanghai or Beijing), etc. I don’t know; ask a specialist why the dreams of urban planners so often die.
ask a specialist why the dreams of urban planners so often die.
I’m not a specialist, but the reason is obvious. Their dreams require other people to conform to the planners’ visions, and most people’s ideas and dreams don’t. Urban planners are an extreme example of Thomas Sowell’s Vision of the Anointed.
Heard of bizarre systems? I’m not sure that the term carves reality at the joints, or that urban planning belongs in it, but it is a model for fields that suck purely out of necessity.
That’s an interesting link, but it seems to conflate a variety of different things. In particular, it asserts that chaotic systems need to have a lot of interacting components. This is not true. The doubling map is a very simple system that doesn’t have much in the way of components by any reasonable definition, and it is chaotic. There are a lot of examples like this.
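The doubling map point is easy to see concretely: a system with essentially one moving part still shows exponential sensitivity to initial conditions. A minimal sketch (the starting value and the size of the perturbation are arbitrary choices for illustration):

```python
# Doubling map: x -> 2x mod 1, one of the simplest chaotic systems.
def doubling(x, n):
    """Apply the doubling map n times to x."""
    for _ in range(n):
        x = (2 * x) % 1.0
    return x

# Two starting points that differ by one part in a billion...
x0, y0 = 0.1, 0.1 + 1e-9
# ...separate roughly twice as fast each step, until the gap is
# of order one and the orbits are effectively decorrelated.
for n in (0, 10, 20, 30):
    print(n, abs(doubling(x0, n) - doubling(y0, n)))
```

Note that nothing here involves "many interacting components": the state is a single number and the update rule is one line, yet nearby orbits diverge exponentially.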
Thanks; if you have the time, can you point out any other structural flaws in it? All I had was the vague feeling that it wasn’t as precise or rigorous as I like models to be when they claim to establish a natural type.
I don’t know enough about some of the other fields to reliably comment, but the general impression I get is that this is part of a general pattern of technical terms being used imprecisely, or terms with no actual strict meaning. They seem to confuse a number of different notions of what it means for a system to have ambiguity. While they separate the different types of ambiguity somewhat explicitly, it isn’t obvious to me that this is at all a helpful grouping. I don’t see why it should be useful to think of the ambiguity created by self-reference as being in at all a similar category to the ambiguity created by incomplete information.
Also there are a handful of lines which by the most obvious notions of the terms are just wrong:
Self-references and paradoxes are much discussed problems in Philosophy and Logic alike. A Strange Loop is a situation arising in type systems where you notice that A is a kind of B which is a kind of C which is a kind of A, closing the loop.
This is not a strange loop. This is just a cycle of equivalences. All the time in mathematics one has three statements that one wants to show are the same, and one shows this by showing that A → B, B → C, and C → A. This and variants thereof are a common proof strategy. For example, one of the most straightforward proofs that every prime that is 1 mod 4 is expressible as the sum of two perfect squares works this way.
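The proof strategy described can be stated compactly (a generic sketch of the pattern, not specific to the two-squares theorem):

```latex
% To prove three statements pairwise equivalent, a cycle of
% one-way implications suffices:
\[
  (A \Rightarrow B) \;\wedge\; (B \Rightarrow C) \;\wedge\; (C \Rightarrow A)
  \quad\Longrightarrow\quad
  (A \Leftrightarrow B \Leftrightarrow C)
\]
% Following the cycle around supplies the missing converses,
% e.g. B => C => A gives B => A.
```

There is nothing paradoxical about the cycle; it is exactly what makes the equivalence proof economical.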
Gödel’s Theorem states that any sufficiently powerful representational system can express paradoxes
By most notions of paradox this statement is false. I’m not even sure whether they are trying to talk about the First or the Second Incompleteness Theorem, but neither of them says this. The shared essence of both theorems is that, morally speaking, sufficiently powerful axiomatic systems with certain technical properties can talk about themselves. That’s not the same claim at all. Moreover, that claim by itself isn’t that surprising. If someone had come up with Gödel numbering without Gödel’s theorems in 1901, that probably would have been considered evidence that Hilbert’s program would be successful.
But neither theorem says anywhere that sufficiently powerful systems have paradoxical results. The technical phrasings (there are a variety of phrasings, but I’m going to pick some of the easier ones) are: 1) If an axiomatic system is one in which all proofs are listable, in which one can determine with a primitive recursive function whether a given string is a valid proof in the system, and which is strong enough to model Peano Arithmetic, then the system is either inconsistent or incomplete (in that there is a statement one can make in the language of the system whose truth or falsity is not provable in the system). 2) A system satisfying the above conditions can only prove its own consistency if it is actually inconsistent. (I’m deliberately glossing over here what it means for a system to be able to prove its own consistency. Here be technical details.)
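Using standard notation (T for the theory, Con(T) for the arithmetized consistency statement), the two phrasings above are commonly written as follows, glossing over the same technical details (e.g. ω-consistency vs. Rosser’s strengthening for the first theorem):

```latex
% First Incompleteness Theorem: for a recursively axiomatizable,
% consistent theory T interpreting enough arithmetic, there is a
% sentence G_T that T neither proves nor refutes:
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
% Second Incompleteness Theorem: for such a T,
\[
  T \nvdash \mathrm{Con}(T) \quad \text{whenever } T \text{ is consistent.}
\]
```

Neither statement asserts that T contains or expresses a paradox; both assert the existence of sentences T cannot settle.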
The section on Emergent Properties is also confusing. They seem at some level to talk both about systems where the “emergent properties” arise from an aggregate whole (such as the water example) and about examples where they emerge from systems that have diverse interlocking parts. There’s a common flaw of using “emergent” to mean “arises in a way we don’t understand”, and Eliezer and others have criticized this. They don’t seem to be using it that way in this context, since they include the example of a car. Unless they don’t think people understand cars? I’m not completely sure what they mean here.
Would there be any point in trying to make urban planning better? I’m thinking here of a Scott Aaronson quote.