The problem here is that you have a central theme that, without sufficient justification, becomes more and more extravagant over a few paragraphs. So what start out as true-ish statements about a modest claim end up as false statements about an extreme claim.
What am I talking about?
Modest claim: “we cannot recognize any proposition as meaningful unless we can recognize its truth conditions.”
Getting less modest: “So in order for an AGI to be recognized as intelligent, it would have to share with us a familiarity with the world.”
More extreme claim: “Thus, in order to create an AGI, we would have to create a machine capable of thinking (in the way babies are) and then let it go about experiencing the world.”
Extravagant claim: “Thus, the one and only way to produce an FAI is to teach it to be good in the way we teach children to be good.”
A more restrained line of reasoning would go like this:
We cannot recognize any proposition as meaningful unless we can recognize that it has meaningful truth conditions.
So in order to know that an AI is thinking meaningful thoughts, we have to know that it has some reference for truth conditions that we would find meaningful.
Thus, in order to create an AGI, we would have to create a machine capable of experiencing something we recognize as meaningful.
Given that we won’t be totally certain about the contents of Friendliness when programming an FAI, we will want our AI to have meaningful thoughts about the concept of Friendliness itself.
Thus, we will need any FAI to be able to experience, directly or indirectly, a set of things meaningful to all of human desires and ethics.
What does the second ‘meaningful’ here refer to? I have in mind something like ‘truth conditions articulated in a Tarskian meta-language’.
Pretty much the same thing as “can recognize” means in your sentence—“meaningful to humans.” If you want a short definition, tough; humans are complicated. I should also say that I think the premise is false, even as I restated it, but at least it’s close to true.
Can you explain why you think it’s false? I understand the burden of proof is on me here, but I could use some help thinking this through if you’re willing to grant me the time.
Around here I don’t think we worry too much about the burden of proof :P
Anyhow the objections mostly stem from the fact that we don’t live in a world of formal logic—we live in a world of probabilistic logic (on our good days). For example, if you know that gorblax has a 50% chance of meaning “robin,” and I say “look, a gorblax,” my statement isn’t completely meaningful or meaningless.
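To put a rough number on “not completely meaningful or meaningless,” here is a minimal sketch in Python. All of the probabilities and the assumption that the speaker is reliable are invented purely for illustration:

```python
# Toy Bayesian reading of the "gorblax" example. All numbers are invented
# for illustration; the speaker is assumed reliable (they only say
# "look, a gorblax" when whatever "gorblax" denotes is actually present).

p_means_robin = 0.5    # P("gorblax" refers to robins)
p_robin_prior = 0.1    # P(a robin is present), before hearing the utterance

# Marginalize over the two hypotheses about what the word means:
#   - if "gorblax" means robin, the utterance guarantees a robin is present;
#   - if it means something else, the utterance says nothing about robins.
p_robin_posterior = (p_means_robin * 1.0
                     + (1 - p_means_robin) * p_robin_prior)

print(p_robin_prior)      # 0.10
print(p_robin_posterior)  # 0.55: the utterance shifted our anticipation
                          # partway, i.e. it was partially meaningful
```

On this toy reading, a partially understood word still constrains anticipated experience, just less sharply than a fully understood one would.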
But how would we come to an estimate of its meaning in the first place? I suppose by understanding how the utterance constrains the expectations of the utterer, no? What is this other than knowing the truth conditions of the utterance? And truth conditions have to be just flatly something that can satisfy an anticipation. If we re-raise the problem of meaning here (why should we?), we’ll run into a regress.