(Epistemic status: gut reaction to the beginning of the post; thought out during writing)
It seems that a useful measure of complexity might arise from thinking of a phenomenon as a causal graph; specifically, the complexity of a phenomenon can be described as “What fraction of the phenomenon’s causal components need to break before the phenomenon ceases to occur, penalized by the complexity of the components?”
This has the benefit of being able to talk about a phenomenon at multiple levels of precision (with its components at multiple levels of abstraction) and still get useful bounds on the “complexity” of the phenomenon. It also has the benefit of matching the following intuitions:
- a phenomenon where every atom needs to be in just the right place is more complex than a phenomenon with some redundancy/fault-tolerance
- a phenomenon describable by a small number of complex components is more complex than one describable by the same number of simple components
- a phenomenon describable by a small number of complex components is simpler than one which can only be described by a large number of simple components
It also has the following less intuitive properties:
- a phenomenon that is “caused” by many simple components but is extremely fault-tolerant is simpler than one caused by a few simple components, each absolutely necessary
- whether a fault-tolerant phenomenon made up of fragile complex components is more or less complex than a fragile phenomenon made up of fault-tolerant complex components is up in the air; of course, if the components are of equal complexity, the former is simpler, but if instead each component is made of the same number of atoms, the question doesn’t have an obvious (to me) answer
- the “abstraction penalty” is a degree of freedom; I was imagining it as an additive or multiplicative constant, but different penalties (e.g. “+ 1”, “* 1.5”, or “+ 0”) may lead to differently interesting notions of complexity
- you don’t need to ground out; that is, you can pick arbitrary atoms of complexity and still make meaningful relative claims of complexity
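One way to make the idea concrete is a toy sketch like the following. This is my own guess at one possible formalization, not a definition from the post: the penalty function, the choice to divide by the break fraction, and the function name `complexity` are all assumptions made for illustration.

```python
def complexity(component_complexities, breaks_needed, penalty=lambda c: c + 1):
    """Toy version of the proposed measure.

    component_complexities: the complexity assigned to each causal component
        (these can be arbitrary "atoms of complexity"; only relative
        comparisons are meaningful)
    breaks_needed: how many components must break before the phenomenon
        ceases to occur (1 = maximally fragile, len(...) = very tolerant)
    penalty: the "abstraction penalty" applied to each component's
        complexity; here an additive "+ 1", one of the options mentioned
    """
    n = len(component_complexities)
    # Fragility is the inverse of the fraction of components that must break.
    fragility = n / breaks_needed
    return fragility * sum(penalty(c) for c in component_complexities)

# Fragile phenomenon: any single component breaking destroys it.
fragile = complexity([1, 1, 1, 1], breaks_needed=1)
# Fault-tolerant phenomenon with the same components: all four must break.
tolerant = complexity([1, 1, 1, 1], breaks_needed=4)
assert fragile > tolerant  # matches the first intuition above
```

Under this sketch, the fragile version scores 32 and the tolerant one 8, and two components of complexity 5 score higher than two of complexity 1, matching the second intuition; whether it matches the less obvious properties depends on the penalty chosen.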
Is there prior work on such a model of complexity?