As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into a multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies like “everything imaginable exists”. Thus, promoting any M->R, or a certain set {MP}->R, seems forever meaningful once you fall for it.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, the pitfalls of which assumption we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
OK.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that M1 contains K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other.
Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?
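The two-universals argument above can also be checked mechanically. Here is a toy sketch of my own (not anything proposed in the discussion): boxes are modeled as intervals, “X is contained by Y” as strict interval nesting, and an exhaustive search confirms that no small configuration satisfies both universal claims at once.

```python
from itertools import product

# "X is contained by Y" means X's interval is strictly inside Y's.
def contained(x, y):
    return y[0] < x[0] and x[1] < y[1]

def both_universals_hold(reds, blues):
    """Every red box inside some blue box, AND every blue box inside
    some red box."""
    return (all(any(contained(r, b) for b in blues) for r in reds) and
            all(any(contained(b, r) for r in reds) for b in blues))

# All intervals with integer endpoints in a small grid.
coords = [(a, b) for a in range(5) for b in range(5) if a < b]

# Exhaustive search: no arrangement of up to two red and two blue
# boxes (duplicates allowed) satisfies both universal claims.
found = any(
    both_universals_hold([r1, r2], [b1, b2])
    for r1, r2, b1, b2 in product(coords, repeat=4)
)
print(found)  # False
```

The general reason is that strict containment implies strictly greater size, so in any finite collection the largest box cannot be contained in anything; an implicit “some but not all” is the only escape.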
(You’re implicitly assuming that X is not the same as Y, I am guessing.) And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Beats me.
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Sort of, yes. Except mind is also a model.
Well, to a solipsist hers is the only mind that exists, to an instrumentalist, as we have agreed, the term exist does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t even thought of observing thus far still do correlate to our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
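The shared-node point can be illustrated with a toy simulation (my own illustration; the particular probabilities are arbitrary): two binary “inputs” driven by a hidden common cause agree far more often than two mutually independent ones.

```python
import random

random.seed(0)

def sample(n, shared_cause=True):
    """Sample two binary 'inputs' n times, either both driven by a
    hidden shared node or drawn independently."""
    pairs = []
    for _ in range(n):
        if shared_cause:
            h = random.random() < 0.5          # hidden common cause
            a = random.random() < (0.9 if h else 0.1)
            b = random.random() < (0.9 if h else 0.1)
        else:
            a = random.random() < 0.5
            b = random.random() < 0.5
        pairs.append((a, b))
    return pairs

def agreement(pairs):
    """Fraction of trials on which the two inputs agree."""
    return sum(a == b for a, b in pairs) / len(pairs)

print(agreement(sample(10000, shared_cause=True)))   # ~0.82: correlated
print(agreement(sample(10000, shared_cause=False)))  # ~0.50: independent
```

With the shared node, the expected agreement is 0.9² + 0.1² = 0.82; without it, 0.5. Observing the former when you expected the latter is exactly the kind of surprise Bugmaster is pointing at.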
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
Nor do I.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
I.e., realism explains how you can predict at all.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
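For what it’s worth, one standard way to quantify such a prediction (my gloss, not something anyone in the thread proposed) is Laplace’s rule of succession: having seen the matching light come on in 3 of 3 tried settings, assign probability (3+1)/(3+2) = 4/5 that the pattern holds for the next setting.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: posterior probability that the
    next trial succeeds, given `successes` out of `trials` so far,
    starting from a uniform prior."""
    return Fraction(successes + 1, trials + 2)

# Settings 1-3 each lit the matching light: 3 successes in 3 trials.
print(rule_of_succession(3, 3))  # 4/5
```

The point of the analogy survives either way: the confidence comes from a model of the box, not from having already observed positions 4 through 10.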
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and is not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Fixed that for you.
I suppose shminux would claim that explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.
But that’s a terrible argument. If you can’t justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
Shminux seems to be positing inputs and models at the least.
I think you quoted the wrong thing there, BTW.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
And inverted stupidity is..?