What I meant was a conjunctive claim: ‘We want our AI’s beliefs to rapidly approach the truth’, and ‘the truth probably looks reasonably similar to contemporary physical theory’
Then I agree with you. This was all a misunderstanding. Read my original comment as a nitpick about your choice of words, then.
...
The truth does probably look reasonably similar to contemporary physical theory, but we can handle that by giving the AI the appropriate priors. We don’t need to make it actually rule stuff out entirely, even though it would probably work out OK if we did.
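The difference between "appropriate priors" and "ruling stuff out entirely" is a standard Bayesian point worth making concrete: a hypothesis assigned probability exactly zero can never recover under Bayes' rule, no matter how much evidence arrives, while one assigned even a tiny nonzero prior can. A minimal sketch (my illustration, not from the thread; the likelihood values are arbitrary):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# 'Ruled out entirely': a prior of exactly 0 stays 0 forever,
# even after 20 rounds of strongly favorable evidence.
p = 0.0
for _ in range(20):
    p = bayes_update(p, likelihood_h=0.9, likelihood_not_h=0.1)
print(p)  # 0.0

# 'Appropriate (low) prior': the same evidence stream lets the
# hypothesis climb to near-certainty.
p = 1e-9
for _ in range(20):
    p = bayes_update(p, likelihood_h=0.9, likelihood_not_h=0.1)
print(p)  # ~1.0
```

So giving a strange ontology a very small prior preserves the practical behavior of "the truth looks like contemporary physics" while keeping the AI corrigible to surprising evidence.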
I don’t think it would be that difficult for us to formalize “monad.” Monads are actually pretty straightforward as I understand them. Ideas would be harder. At any rate, I don’t think we need to formalize lots of different fundamental ontologies and have it choose between them. Instead, all we need to do is formalize a general open-mindedness towards considering different ontologies. I admit this may be difficult, but it seems doable. Correct me if I’m wrong.
? Why exactly is it sillier to think our universe is made of morality-stuff than to think our universe is made of mind-stuff?
I didn’t exclude morality fluid because I thought it was sillier; I excluded it because I thought it wasn’t even a thing. You might as well have said “aslkdj theory” and then challenged me to explain why “aslkdj theory” is sillier than monads or ideas. It’s an illegitimate challenge, since you don’t mean anything by “aslkdj theory.” By contrast, there are actual bodies of literature on idealism and on monads, so it is legitimate to ask me what I think about them.
To put it another way: He who introduces a term decides what that term means. "Monads" and "Ideas," having been introduced by very smart, thoughtful people and discussed by hundreds more, definitely are meaningful, at least meaningful enough to talk about. (Meaningfulness comes in degrees.) If we talk about morality fluid, which I suspect is something you made up, then we rely on whatever meaning you assigned to it when you made it up—but since you (I suspect) assigned no meaning to it, we can't even talk about it.
EDIT: So, in conclusion, if you tell me what morality fluid means, then I’ll tell you what I think about it.
Ah, OK. What I mean by ‘the world is made of morality’ is that physics reduces to (is fully, accurately, parsimoniously, asymmetrically explainable in terms of) some structure isomorphic to the complex machinery we call ‘morality’. For example, it turns out that the mathematical properties of human-style Fairness are what explains the mathematical properties of dark energy or quantum gravity.
This doesn’t necessarily mean that the universe is ‘fair’ in any intuitive sense, though karmic justice might be another candidate for an unphysicalistic hypothesis. It’s more like the hypothesis that a simulation deity created our moral intuitions, then built our universe out of the patterns in that moral code. Like a somewhat less arbitrary variant on ‘I’m going to use a simple set of letter-to-note transition rules to convert the works of Shakespeare into a new musical piece’.
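The letter-to-note analogy is just a mechanical conversion rule, which a toy sketch makes concrete (my construction, not the commenter's; the note mapping is arbitrary). The point is that the output's structure is entirely inherited from the source text plus a simple fixed rule:

```python
# A simple letter-to-note transition rule: map each letter to a note
# by its alphabetical position mod 7 (an arbitrary illustrative choice).
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def text_to_notes(text):
    """Deterministically convert text into a note sequence."""
    return [NOTES[(ord(c) - ord("a")) % 7] for c in text.lower() if c.isalpha()]

print(text_to_notes("To be"))  # ['A', 'C', 'D', 'G']
```

Analogously, the hypothesis is that our physics stands to the moral code as this note sequence stands to Shakespeare: a patterned transformation of it, not an arbitrary independent structure.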
I think this view is fully analogous to idealism. If it makes complete sense to ask whether our world is made of mental stuff, it can’t be because our mental stuff is simultaneously a complex human brain operation and an irreducible simple; rather, it’s because the complex human brain operation could have been a key ingredient in the laws and patterns of our universe, especially if some god or simulator built our universe.
I don’t think we need to formalize lots of different fundamental ontologies and have it choose between them. Instead, all we need to do is formalize a general open-mindedness towards considering different ontologies. I admit this may be difficult, but it seems doable. Correct me if I’m wrong.
I don’t think I know enough to correct you. But I can express my doubts. I suspect ‘a general open-mindedness towards considering different ontologies’ can’t be formalized, or can’t be both formalized and humanly vetted. At a minimum, we’ll need to decide what gets to count as an ‘ontology’, which means drawing the line somewhere and declaring everything outside a certain set of boundaries nonsensical. And I’m skeptical that there’s any strongly principled way to determine that ‘colorless green ideas sleep furiously’ is contentless or nonsensical or ‘non-ontological’, while ‘the world is made of partless fundamental ideas’ is contentful and meaningful and picks out an ontology.
(Which doesn’t mean I think we should be rude or dismissive toward idealists in ordinary conversation. We should be very careful not to conflate the question ‘what questions should we treat with respect or inquire into in human social settings’ with the question ‘what questions should we program a Friendly AI to be able to natively consider’.)
Thanks for that explanation of mental stuff. My opinion? Sounds implausible, but fine, in the sense that we shouldn’t build our AI in a way that makes it incapable of considering that hypothesis. As an aside, I think it is less plausible than idealism, because it lacks the main cluster of motivations for idealism. The whole point of idealism is to be monist (and thus achieve ontological parsimony) whilst also “taking consciousness seriously.” As seriously as possible, in fact. Perhaps more seriously than is necessary, but anyhow that’s the appeal. Morality fluid takes morals seriously (maybe? Maybe not, actually, given your construction) but it doesn’t take consciousness any more seriously than physicalism, it seems. And, I think, it is more important that our theories take consciousness seriously than that they take morality seriously.
I suspect ‘a general open-mindedness towards considering different ontologies’ can’t be formalized, or can’t be both formalized and humanly vetted.
Humans do it. If intelligent humans can consider a hypothesis, an AI should be able to as well. In most cases it will quickly realize a given hypothesis is silly or even self-contradictory, but at least it should be able to give each one an honest try, rather than classifying it as nonsense from the beginning.
At a minimum, we’ll need to decide what gets to count as an ‘ontology’, which means drawing the line somewhere and declaring everything outside a certain set of boundaries nonsensical.
Doesn’t seem too difficult to me. It isn’t really an ontology/non-ontology distinction we are looking for, but a “hypothesis about the lowest level of description of the world / not that” distinction. Since the hypothesis itself states whether or not it is about the lowest level of description of the world, really all this comes down to is the distinction between a hypothesis and something other than a hypothesis. Right?
My general idea is, we don’t want to make our AI more limited than ourselves. In fact, we probably want our AI to reason “as we wish we ourselves would reason.” You don’t wish you were incapable of considering idealism, do you? If you do, why?