This sounds like advice for someone interested in writing posts rather than in developing formal arguments… In logic we are after transitivity and rigor, not some aesthetic preference; perhaps you overlooked that the content above is “somewhat” independent of its prosaic form.
“Most posts have flaws”
Where to find flaws in the argument presented… There are several places where my reasoning can be challenged; the key is to do so without endorsing Yudkowsky’s views.
The game is a bit unfair, as it is inherently easier to “destroy” positivistic arguments with first-order inference: entropy, smoothness, etc. are basically where all the limitative theorems reside; arguments relying on these often collapse under scrutiny, and most of the time the failure is baked into the system.
The whole “less wrong” shtick forgets that sometimes there is only “wrong”. As if converting smartness into casino winnings were so simple… different people would be rich.
Note: I will slowly start to delete posts like this due to irrelevance and redundancy.
(not enhanced)
“Editing with” AI does not consistently preserve semantics, or I wouldn’t ask. Incidentally, LessWrong has a human editor available; just ask in the Intercom bubble.
There are many places where I’m confused and suspect semantics were not preserved, and it would be much more efficient for you to show me the prompts than for me to nitpick all the little points. The post seems, overall, to make an interesting point; I am not telling you it’s a bad post. May I see the prompts?
On semantics:
Actually, semantics “can” be conserved. An example:
- Change in semantics: “This is my second car” vs “this is my second skateboard”.
- Change in syntactics: “This is my second car” vs “this is my third car”.
- No formal change in the strict sense: “This is my second car” vs “This is—actually—my second car!”
Change during prompting: “Can be both, either, neither.”
Considering the prompting itself:
I copy/pasted a rough draft (2–4 paragraphs?) and wrote something like: “Improve.” And then I edited again.
On the opaqueness:
The problem could be “limitative” thinking. Most weak logics are eliminative (e.g. applying the implication A → B eliminates A, etc.). We can only look for contradictions and check for soundness.
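For instance, implication elimination (modus ponens) is exactly this consuming move: using A → B spends a proof of A. A minimal sketch in Lean, purely illustrative (the `example` checks the derivation and proves nothing beyond the rule itself):

```lean
-- Implication elimination (modus ponens): applying h : A → B to a : A
-- consumes the proof of A and yields a proof of B.
example (A B : Prop) (h : A → B) (a : A) : B := h a
```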
On the post:
We can reduce it to a premise:
- “Every physically lawful protein has a well-defined ground-state structure determined by a finite reduction.” ≈ “Given unbounded time/energy, we can in principle compute the fold of any amino-acid sequence.”
and show that, from:
- “Biological evolution, using only local variation and selection, has discovered many stable, functional proteins.”
...it does NOT follow that:
- “Therefore the space of evolution-accessible proteins has simple, exploitable structure (shallow energy landscapes), so we should expect to be able to build a practical ML system that reliably predicts their structures (or designs new ones) given enough data and compute.”
More context:
- Under a finite reduction (a model of folding with a well-ordering property on a given lattice), protein folding is expressive enough to simulate arbitrary computation; hence, the general folding problem is Turing-complete. A toy lattice instance is sketched below.
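To make the “finite reduction” premise concrete, here is a minimal sketch in Python, under illustrative assumptions: the 2-D HP lattice model (H/P residues, energy −1 per non-consecutive H–H contact) stands in for real folding, and the sequence is arbitrary. Brute-force enumeration finds the exact ground state of any short sequence given unbounded time, exactly as the premise says; it says nothing about tractability, since even this toy model is NP-hard.

```python
# Toy "finite reduction": exhaustive ground-state search in the 2-D HP model.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def energy(seq, path):
    """Count non-consecutive H-H lattice contacts (each contributes -1)."""
    index = {p: i for i, p in enumerate(path)}
    e = 0
    for i, (x, y) in enumerate(path):
        if seq[i] != "H":
            continue
        for dx, dy in MOVES:
            j = index.get((x + dx, y + dy))
            if j is not None and j > i + 1 and seq[j] == "H":
                e -= 1  # j > i + 1 counts each contact exactly once
    return e

def ground_state(seq):
    """Enumerate all self-avoiding walks: guaranteed optimal, but
    exponential in len(seq) -- 'unbounded time' made literal."""
    best = [0, None]
    def extend(path):
        if len(path) == len(seq):
            e = energy(seq, path)
            if e < best[0]:
                best[0], best[1] = e, list(path)
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:          # self-avoiding constraint
                path.append(nxt)
                extend(path)
                path.pop()
    extend([(0, 0), (1, 0)])  # fix the first bond to factor out symmetry
    return tuple(best)

print(ground_state("HPHPPHHPHH"))  # (minimum energy, one optimal fold)
```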
So: every computable function must impose a final blur over that discrete lattice: a smoothing, regularization, or bounded-precision step that ensures computability. Without this blur, the system would risk confronting an unsolvable folding instance that would run forever. Excluding such cases implicitly constitutes a decision about an undecidable set. (Interestingly, the ‘blur’ itself also has a ‘blur’; call it a sigmoid, a division, etc.)
We can never determine exactly what potential structure is lost in this process, i.e. which regions of the formal folding landscape are suppressed or merged by the regularization that keeps the model finite and tractable. We could train a sharper model, but then the problem repeats.
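A toy illustration of that blur, under illustrative assumptions (the 1-D landscape and the Gaussian kernel are stand-ins for whatever bounded-precision step a real model uses): two nearby minima with the left one slightly lower; at coarse enough resolution the smoothed view merges the basins and can no longer say which structure is the true ground state.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
energy = (x**2 - 0.01)**2 + 0.001 * x   # double well, global min near x = -0.1

def blurred(e, sigma):
    """Gaussian smoothing as a stand-in for any computability-ensuring blur."""
    k = np.exp(-0.5 * (np.arange(-300, 301) / sigma) ** 2)
    return np.convolve(e, k / k.sum(), mode="same")

for sigma in (1, 50, 200):               # resolution in grid cells
    xmin = x[np.argmin(blurred(energy, sigma))]
    print(f"sigma={sigma:4d}: apparent ground state at x = {xmin:+.3f}")
# Fine resolution recovers the minimum near -0.1; coarse resolution merges
# the basins and the answer drifts. From inside the blurred model, the
# suppressed structure is invisible -- which is the point above.
```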
And yes, that means we don’t know whether ML will produce diamond bacteria. Could be; could be not.
On Bayesianism:
Since Bayesianism is popular here, the failure could be explained as follows: no Bayesian system can coherently answer the meta-question (a toy sketch of the resulting regress follows below):
- “What is the probability that I can correctly assign probabilities?”
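A minimal sketch of that regress in Python, under a loudly labeled assumption: each meta-level of self-trust passes down some fixed reliability c, a number nothing inside the system can justify.

```python
# Toy regress: an agent reports p, but is only calibrated with confidence
# c1; trusting c1 needs a confidence c2; and so on. No finite level closes
# the chain. The per-level reliability c = 0.99 is an arbitrary assumption.
def effective_confidence(levels: int, c: float = 0.99) -> float:
    """Confidence left after discounting by `levels` meta-levels of self-trust."""
    p = 1.0
    for _ in range(levels):
        p *= c   # each meta-question erodes the usable probability
    return p

for n in (1, 10, 100, 1000):
    print(n, round(effective_confidence(n), 4))
# Only an external stipulation (fixing c, or cutting the regress at some
# finite depth) makes the answer well-defined; the system cannot certify
# its own calibration.
```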