It is mistaken to hold the ‘reasonable’ middle position that everything is neither ‘great’ nor ‘inadequate’ but ‘middling’. In fact, some things are at one extreme, and some things are at the other!
I see now. It can be true both that some people just need a friendly pointer to set them right and that others are beyond saving.
No, a position between extremes is a one-dimensional thing.
And he is talking about something which should be understood as at least two-dimensional (this reminds me of paraconsistent logic, which tends to be modeled by bilattices with a “material” dimension and an “informational” dimension).
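To make the two dimensions concrete, here is a minimal sketch in Python of a Belnap-style four-valued bilattice. This is my own illustration: the value names, and the reading of “material” as the truth ordering and “informational” as the knowledge ordering, are my assumptions, not anything from the original discussion.

```python
# A minimal sketch of a Belnap-style bilattice: four values ordered in two
# independent ways. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class BelnapValue:
    evidence_for: bool      # have we seen support for the claim?
    evidence_against: bool  # have we seen support against it?

NONE  = BelnapValue(False, False)  # no information yet
TRUE  = BelnapValue(True,  False)
FALSE = BelnapValue(False, True)
BOTH  = BelnapValue(True,  True)   # contradictory information

def truth_leq(a, b):
    """The "material" (truth) ordering: FALSE at the bottom, TRUE at the top."""
    return (b.evidence_for or not a.evidence_for) and \
           (a.evidence_against or not b.evidence_against)

def info_leq(a, b):
    """The "informational" ordering: NONE at the bottom, BOTH at the top."""
    return (b.evidence_for or not a.evidence_for) and \
           (b.evidence_against or not a.evidence_against)

# TRUE and FALSE are the two extremes of the truth ordering, yet they are
# incomparable in the information ordering: a "middle" on one axis tells
# you nothing about where you sit on the other axis.
assert truth_leq(FALSE, TRUE)
assert not info_leq(FALSE, TRUE) and not info_leq(TRUE, FALSE)
```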
I disagree. “Some people are nice, some people are mean” is a middle position between “everyone is nice” and “everyone is mean”.
No, what you say is like when people say of multi-objective optimization: “just take a linear combination of your loss functions and optimize that”.
But this reduction to one dimension loses too much information: it de-emphasizes the separate constraints and does not work well with swarms of solutions. Similarly, a single intermediate position does not work well for the “society of mind” multi-agent models of the human mind, whereas separate (and not necessarily mutually consistent) positions work well.
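To illustrate what the collapse loses, here is a toy sketch; the objectives, weight, and candidate set are all made up purely for illustration. A fixed-weight scalarization returns a single compromise point, while keeping the Pareto front preserves the whole swarm of mutually inconsistent trade-offs.

```python
# Toy illustration: scalarization vs. keeping the Pareto front.
# loss_a, loss_b and the candidate set are hypothetical.
import random

def loss_a(x):
    return (x - 1.0) ** 2  # objective 1: prefers x near +1

def loss_b(x):
    return (x + 1.0) ** 2  # objective 2: prefers x near -1

candidates = [random.uniform(-2.0, 2.0) for _ in range(200)]

# One-dimensional compromise: fixed weights yield exactly one answer.
w = 0.5
best_scalarized = min(candidates, key=lambda x: w * loss_a(x) + (1 - w) * loss_b(x))

# Multi-dimensional view: keep every non-dominated candidate.
def is_dominated(x, others):
    return any(
        loss_a(y) <= loss_a(x) and loss_b(y) <= loss_b(x)
        and (loss_a(y) < loss_a(x) or loss_b(y) < loss_b(x))
        for y in others
    )

pareto_front = [x for x in candidates if not is_dominated(x, candidates)]

print(best_scalarized)    # a single point
print(len(pareto_front))  # a swarm of distinct trade-offs survives
```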
One can consider a single position without losing anything if one allows it to vary in time. Like “let me believe this now”, or “let me believe that now”, or “yes, let me believe a mixture of positions X and Y, a(t)*X + b(t)*Y” (which works OK if a(t) and b(t) can vary with t in an arbitrary fashion, but not if they are constant coefficients).
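A toy encoding of that, with X and Y as plain numbers purely for illustration (the weight schedules a(t) and b(t) are hypothetical):

```python
import math

# X and Y stand in for two object-level positions; encoding them as
# numbers is an illustrative assumption.
X, Y = 0.0, 1.0

def a(t):
    return (1.0 + math.cos(t)) / 2.0  # hypothetical time-varying weight

def b(t):
    return 1.0 - a(t)

def position(t):
    return a(t) * X + b(t) * Y

# With time-varying weights the blend can equal pure X (t = 0), pure Y
# (t = pi), or anything in between; with constant coefficients it would
# be pinned to one fixed compromise forever.
print(position(0.0), position(math.pi), position(math.pi / 2))
```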
One way or another, one wants to have an inner diversity of viewpoints rather than a unified compromise position. Then one can look at things from different angles.
There is one territory. There should be one map that corresponds to it. If one map predicts things well on one occasion and another predicts things well on another occasion, then both are clearly wanting and you need to combine them into an actually good map that isn’t surprised half the time.
I think the math you’re sharing is muddling things. Maybe try math for some kind of predictor/estimator function that takes whichever inputs are required to predict accurately, be it time or whatever.
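Something like this, say (a minimal sketch; map_x, map_y, and the regime label are placeholders I am inventing for illustration):

```python
# Sketch of folding "which map works when" into one predictor's inputs.
# map_x and map_y are placeholders for the two partial maps.

def map_x(obs):
    return 0.0  # hypothetical map that predicts well in regime A

def map_y(obs):
    return 1.0  # hypothetical map that predicts well in regime B

def unified_predictor(obs, context):
    """One map: the context that used to decide "which map to consult"
    is now just another input to a single predictor."""
    return map_x(obs) if context == "regime_a" else map_y(obs)
```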
There might be one territory. That is, itself, a meta-belief.
Some people think that the multiverse is much closer to our day-to-day life than is customarily assumed (yes, this is itself controversial, but it is something to keep in mind as a possibility). And then “one territory” would be stretching it quite a bit (although, yes, one can still reify the whole multiverse as the territory, so it would still be “one territory”, just much larger than our typical estimates of its size).
I don’t know. Let’s consider an example. Eliezer thinks chickens don’t have qualia. Most of those who think about qualia at all think that chickens do have qualia.
I understand how the OP would handle that. How do you propose to handle it?
The “assumption of one territory” presumably implies that grown chickens normally either all have qualia or all don’t (unless we expect some strange undiscovered stratification among chickens).
So, what is one supposed to do for an “intermediate” object-level position? I mean, say I really want to know whether chickens have qualia. And I don’t want to pre-decide the answer, and I notice the difference of opinions. How would you approach that?

This seems like a convoluted way of arriving at a single, functional, reasonable position between the extremes.