This sentence seems off. It isn’t clear what is meant by mechanical in this context other than to shove through a host of implied connotations.
Hrm. If I had used the word “procedural” rather than “mechanical”, would that have, do you think, prevented this impression?
Asserting that something non-trivial can be put in terms of math when one can’t do so on one’s own and doesn’t provide a reference seems less than conducive to good discussion.
If I am not a physicist, does that disqualify me from making claims about what a physicist would be relatively easily able to do? For example: “I’m not sufficient to the task of calculating my current relativistic mass—but anyone who works with general relativity would have no trouble doing this.”
So what am I missing with this element? Because I genuinely cannot see a difference between “a mathematician / AI worker could express in mathematical or computational terms the nature of recursive selection pressure” and “a general relativity physicist could calculate my relativistic mass relative to the Earth” in terms of the exceptionalism of either claim.
Is it perhaps that my wording appears to be implying that I meant more than “goals can be arranged in a graph of interdependent nodes that recursively update one another for weighting”?
Part of the reason why the sentence bothers me is that I’m a mathematician and it wasn’t obvious to me that there is a useful way of making the statement mathematically precise.
Is it perhaps that my wording appears to be implying that I meant more than “goals can be arranged in a graph of interdependent nodes that recursively update one another for weighting”?
So this is a little better and that may be part of it. Unfortunately, it isn’t completely obvious that this is true either. This is a property that we want goal systems to have in some form. It isn’t obvious that all goal systems in some broad sense will necessarily do so.
It isn’t obvious that all goal systems in some broad sense will necessarily do so.
“All” goal systems don’t have to; only some. The words I use to form this sentence do not comprise the whole of the available words of the English language—just the ones that are “interesting” to this sentence.
It would seem implicit that any computationally based artificial intelligence would have a framework for computing. If that AI has volition, then it has goals. Since we are already discussing, topically, a recursively self-improving AI, it has volition, that is, direction. So we see that it by definition has to have computable goals.
Now, for my statement to be true—the original one that was causing the problems, that is—it’s only necessary that this be expressible in “mathematical / computational terms”. Those terms need not be practically useful—in much the same way that a “proof of concept” is not the same thing as a “finished product”.
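To make the “expressible in computational terms” point concrete, here is a minimal sketch of what I mean by “goals arranged in a graph of interdependent nodes that recursively update one another for weighting”. The node names, coupling values, and update rule are all illustrative assumptions of mine, not a standard algorithm; it is a proof of concept in exactly the sense above, not a finished product.

```python
class GoalNode:
    """A goal with a weight, linked to other goals it influences."""

    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.influences = []  # list of (other_node, coupling_strength)

    def link(self, other, coupling):
        self.influences.append((other, coupling))


def update_weights(nodes, steps=10, rate=0.1):
    """Recursively re-weight goals: each node nudges the weights of the
    goals it influences, then all weights are renormalized to sum to 1."""
    for _ in range(steps):
        deltas = {n: 0.0 for n in nodes}
        for n in nodes:
            for other, coupling in n.influences:
                deltas[other] += rate * coupling * n.weight
        for n in nodes:
            n.weight = max(0.0, n.weight + deltas[n])
        total = sum(n.weight for n in nodes) or 1.0
        for n in nodes:
            n.weight /= total
    return {n.name: round(n.weight, 3) for n in nodes}


# Illustrative use: a goal that reinforces another gradually shifts
# weight toward it, while the distribution stays normalized.
resources = GoalNode("acquire-resources", 0.5)
learning = GoalNode("improve-own-code", 0.5)
resources.link(learning, 1.0)
weights = update_weights([resources, learning])
```

Nothing about this sketch is practically useful for building an AI; it only demonstrates that the structure can be written down in computational terms at all, which is all the original statement required.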
Additionally, I somewhat have trouble grappling with the rejection of that original statement, given that values can be defined as “beliefs about what should be”, and we already express beliefs in Bayesian terms as a matter of course on this site.
What I mean here is: given the new goal of finding better ways for me to communicate with LWers, what’s the difference here? Why is it not okay for me to make statements that rest on commonly accepted ‘truths’ of LessWrong?
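And when I say we express beliefs in Bayesian terms as a matter of course, I mean nothing more exotic than the standard update rule. A minimal illustration (the numbers are arbitrary, chosen only to make the arithmetic clean):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence


# Starting from a 50% prior, evidence twice as likely under H
# raises the belief to 2/3.
posterior = bayes_update(0.5, 0.8, 0.4)  # -> 0.666...
```

If beliefs are routinely handled this way here, and values are beliefs about what should be, then treating values as mathematically expressible hardly seems an extraordinary claim.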
Is it the admission of my own incompetence to derive that information “from scratch”? Is it my admission of having a non-mathematically-rigorous understanding of what is mathematically expressible?
(If it is the latter, then I find myself leaning towards the conclusion that the problem isn’t with me, but with the people who downvote me for it.)
I would downvote a comment that confidently asserted a claim of which I am dubious, when the author has no particular evidence for it, and admits to having no evidence for it.
This applies even if many people share the belief being asserted. I can’t downvote a common unsupported belief, but I can downvote the unsupported expression of it.