Suppose that you are studying a biological system, and you suspect the presence of control systems. What sort of experiment might you attempt in order to demonstrate that some observable variable is being actively controlled at some, perhaps unknown, reference value?
Claude’s answer:
To demonstrate that a biological variable is under active control, you would want to design experiments that perturb the system and observe the response pattern. Here are key experimental approaches:
Perturbation experiments:
• Apply controlled disturbances that would normally push the variable away from its suspected set point
• Look for compensatory responses that actively restore the variable toward its original value
• The hallmark of control is that the system “fights back” against perturbations
Step response tests:
• Suddenly change an input that affects your variable of interest
• A controlled system will show characteristic response patterns: initial deviation followed by gradual return toward the set point (often with overshoot/oscillation)
• An uncontrolled system would simply shift to a new steady state
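The contrast between these two step responses can be sketched in a few lines of Python. Everything here is illustrative: a hypothetical first-order process with a PI controller standing in for the biological control mechanism, not a model of any particular system.

```python
SETPOINT = 1.0  # the suspected reference value (arbitrary units)

def step_response(kp=0.0, ki=0.0, disturbance=0.5, t_end=20.0, dt=0.05):
    """Euler-integrate x' = -(x - SETPOINT) + d + u, where u is the output
    of a PI controller; kp = ki = 0 gives the uncontrolled system."""
    x, integral, trace = SETPOINT, 0.0, []
    for step in range(int(t_end / dt)):
        d = disturbance if step * dt >= 5.0 else 0.0  # step disturbance at t = 5
        error = SETPOINT - x
        integral += error * dt
        u = kp * error + ki * integral  # control effort opposing the error
        x += (-(x - SETPOINT) + d + u) * dt
        trace.append(x)
    return trace

passive = step_response()                    # shifts to a new steady state (~1.5)
controlled = step_response(kp=2.0, ki=1.0)   # deviates, then returns toward 1.0
```

The uncontrolled trace settles at SETPOINT + disturbance, while the controlled trace shows the initial deviation and gradual return described above; the integral term is what drives the steady-state error back to zero.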
Dose-response curves:
• Apply increasing levels of perturbation
• Controlled systems often show saturation—small perturbations are well-compensated, but large ones overwhelm the control mechanism
• This reveals the control system’s operating range
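That saturation signature can be sketched with a toy integral controller whose "actuator" has limited capacity, so compensation cannot exceed a ceiling. The model and all numbers are purely illustrative:

```python
def residual_deviation(disturbance, u_max=1.0, t_end=40.0, dt=0.01):
    """Integral control of x' = -x + disturbance + u, with the control
    effort u clipped to [-u_max, u_max]; returns the final deviation
    from the set point (taken as 0 here)."""
    x, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        integral += -x * dt                          # accumulated error (0 - x)
        u = max(-u_max, min(u_max, 2.0 * integral))  # saturating compensation
        x += (-x + disturbance + u) * dt
    return abs(x)

small = residual_deviation(0.5)  # within capacity: fully compensated (~0)
large = residual_deviation(2.0)  # beyond capacity: deviation leaks through (~1.0)
```

Sweeping `disturbance` upward traces the dose-response curve: near-zero residual deviation inside the controller's operating range, then a roughly linear rise of about `disturbance - u_max` once the actuator is pinned at its limit.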
Time course analysis:
• Monitor how quickly the system responds to perturbations
• Active control typically shows faster responses than passive equilibration
• Look for evidence of anticipatory responses or overshoot correction
Pharmacological intervention:
• If you can identify and block suspected control mechanisms (enzymes, receptors, feedback pathways), the variable should become more sensitive to perturbations
• The system should lose its ability to maintain homeostasis
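Blocking the control mechanism can be mimicked in simulation by turning down a feedback gain in a toy proportional-feedback model (hypothetical dynamics, not any specific pharmacological target); the deviation left by a fixed disturbance then grows as the blockade deepens:

```python
def deviation_after_block(gain, disturbance=0.5, t_end=30.0, dt=0.01):
    """x' = -x - gain*x + disturbance: passive decay plus proportional
    feedback toward a set point of 0; gain = 0 is a fully blocked controller."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += (-x - gain * x + disturbance) * dt
    return x

intact = deviation_after_block(gain=9.0)   # deviation ~ disturbance / (1 + gain)
blocked = deviation_after_block(gain=0.0)  # deviation ~ disturbance itself
```

The steady state is disturbance / (1 + gain), so partial blockade (intermediate gains) predicts a graded loss of homeostasis, which is exactly what this experiment looks for.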
Example: To test if blood glucose is actively controlled, you might administer glucose loads of varying sizes while measuring both glucose levels and insulin responses over time. Active control would show rapid insulin responses that scale with perturbation size and successfully return glucose toward baseline levels.
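The glucose-load experiment can be caricatured with a two-variable toy model in which a hormonal signal is secreted in proportion to the glucose deviation and accelerates its clearance. The equations and constants are invented for illustration and are not a physiological model (nothing like Bergman's minimal model):

```python
G0 = 5.0  # baseline glucose (arbitrary units)

def glucose_trial(bolus, feedback_gain=1.0, t_end=20.0, dt=0.01):
    """Toy model: glucose G is cleared passively and by the signal I,
    which is secreted in proportion to the deviation G - G0.
    feedback_gain = 0 simulates a system with no active control."""
    G, I, peak_I = G0 + bolus, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dG = -0.1 * (G - G0) - 0.5 * I           # passive + signal-driven clearance
        dI = feedback_gain * (G - G0) - 0.5 * I  # secretion tracks the deviation
        G += dG * dt
        I += dI * dt
        peak_I = max(peak_I, I)
    return G, peak_I

G_free, _ = glucose_trial(5.0, feedback_gain=0.0)  # passive decay only
G_ctrl, peak_small = glucose_trial(5.0)            # active feedback
_, peak_large = glucose_trial(10.0)                # larger load, larger response
```

With feedback on, glucose returns close to baseline within the trial while the passive system is still far from it, and the peak of the signal scales with the size of the load: the two signatures of active control described above.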
The key signature is negative feedback behavior—perturbations in one direction trigger responses in the opposite direction.
Claude does better and better! My question:
Thanks for these well-researched examples. I agree completely that Claude’s response is impressive and demonstrates a sophisticated ability to reason through causal relationships based on the vast amount of text it’s trained on, which includes explanations of physics and logical principles.
However, through the blog I am trying to argue that this still operates on a different plane than what a human does. The AI’s ‘causal understanding’ is an emergent property of its having read countless examples of causal reasoning (from textbooks, scientific papers, etc.). It’s essentially a brilliant pattern-matcher for the linguistic representation of causation. A human’s understanding, on the other hand, is built from embodied experience, hands-on experimentation, and the abductive reasoning I mentioned in the post.
So, while the AI can describe a causal relationship flawlessly, could it, for instance, infer a novel, unknown causal relationship from incomplete data in a real-world, messy scenario? That’s where I believe humans still have a critical edge, and hence it’s the skill at risk of atrophy.
Claude’s answer:
One powerful example is diagnosing a complex, multi-system failure in a large-scale industrial setting, such as a chemical plant or a power grid. A human operator might notice a seemingly unrelated set of events, like a subtle temperature fluctuation in one part of the system, a delayed sensor reading in another, and a slight change in the viscosity of a fluid, and, based on years of embodied experience and intuitive pattern recognition, infer a novel underlying causal chain that current AI models would not detect. An AI would struggle because:
Incomplete and Messy Data: The data from these systems is often noisy, incomplete, or not logged with the explicit labels an AI would need. A human can interpret a flickering gauge or an unusual sound as a critical data point, something an AI might not easily do.
Abductive, Not Deductive, Reasoning: The operator isn’t working with a clear set of facts from which to deduce a single answer. They are performing abductive reasoning: inferring the most likely explanation from a set of incomplete, messy observations, which requires intuition and experience.
No Existing “Textbook” Answer: The causal relationship might be unique to that specific plant’s design, age, or a recent maintenance change, so there is no matching causal model in the AI’s training data to draw from. The human expert is essentially creating a new causal model in real time.