It isn’t obvious that all goal systems in some broad sense will necessarily do so.
“All” goal systems don’t have to; only some. The words I use to form this sentence do not comprise the whole of the available words of the English language—just the ones that are “interesting” to this sentence.
It would seem implicit that any computationally based artificial intelligence would have a framework for computing. If that AI has volition, then it has goals. And since we are already discussing a recursively self-improving AI, it has volition—that is, direction. So by definition it has to have computable goals.
Now, for my statement to be true—the original one that was causing the problems, that is—it’s only necessary that this be expressible in “mathematical / computational terms”. Those terms need not be practically useful—in much the same way that a “proof of concept” is not the same thing as a “finished product”.
Additionally, I have some trouble grappling with the rejection of that original statement, given that values can be defined as “beliefs about what should be”—and we already express beliefs in Bayesian terms as a matter of course on this site.
What I mean here is: given the new goal of finding better ways for me to communicate with LWers—what’s the difference here? Why is it not okay for me to make statements that rest on commonly accepted ‘truths’ of LessWrong?
Is it the admission of my own incompetence to derive that information “from scratch”? Is it my admission to a non-mathematically-rigorous understanding of what is mathematically expressible?
(If it is the latter, then I find myself leaning towards the conclusion that the problem isn’t with me, but with the people who downvote me for it.)
I would downvote a comment that confidently asserted a claim of which I am dubious, when the author has no particular evidence for it, and admits to having no evidence for it.
This applies even if many people share the belief being asserted. I can’t downvote a common unsupported belief, but I can downvote the unsupported expression of it.