First, let me apologize pre-emptively if I’m retreading old ground; I haven’t carefully read this whole discussion. Feel free to tell me to go reread the damned thread if I’m doing so. That said… my understanding of your account of existence is something like the following:
A model is a mental construct used (among other things) to map experiences to anticipated experiences. It may do other things along the way, such as represent propositions as beliefs, but it needn’t. Similarly, a model may include various hypothesized entities that represent certain consistent patterns of experience, such as this keyboard I’m typing on, my experiences of which consistently correlate with my experiences of text appearing on my monitor, responses to my text later appearing on my monitor, etc.
On your account, all it means to say “my keyboard exists” is that my experience consistently demonstrates patterns of that sort, and consequently I’m confident of the relevant predictions made by the set of models (M1) that have in the past predicted patterns of that sort, not-so-confident of relevant predictions made by the set of models (M2) that predict contradictory patterns, etc. etc. etc.
We can also say that M1 all share a common property K that allows such predictions. In common language, we are accustomed to referring to K as an “object” which “exists” (specifically, we refer to K as “my keyboard”), which is as good a way of talking as any, though sloppy in the way of all natural language.
We can consequently say that M1 all agree on the existence of K, though of course that may well elide over many important differences in the ways that various models in M1 instantiate K.
We can also say that M1 models are more “accurate” than M2 models with respect to those patterns of experience that led us to talk about K in the first place. That is, M1 models predict relevant experience more reliably/precisely/whatever.
And in this way we can gradually converge on a single model (MR1), which includes various objects, and which is more accurate than all the other models we’re aware of. We can call MR1 “the real world,” by which we mean the most accurate model.
Of course, this doesn’t preclude uncovering a new model MR2 tomorrow which is even more accurate, at which point we would call MR2 “the real world”. And MR2 might represent K in a completely different way, such that the real world would now, while still containing the existence of my keyboard, contain it in a completely different way. For example, MR1 might represent K as a collection of atoms, and MR2 might represent K as a set of parameters in a configuration space, and when I transition from MR1 to MR2 the real world goes from my keyboard being a collection of atoms to my keyboard being a set of parameters in a configuration space.
Similarly, it doesn’t preclude our experiences starting to systematically change such that the predictions made by MR1 are no longer reliable, in which case MR1 stops being the most accurate model, and some other model (MR3) is the most accurate model, at which point we would call MR3 “the real world”. For example, MR3 might not contain K at all, and I would suddenly “realize” that there never was a keyboard.
All of which is fine, but the difficulty arises when after identifying MR1 as the real world we make the error of reifying MRn, projecting its patterns onto some kind of presumed “reality” R to which we attribute a kind of pseudo-existence independent of all models. Then we misinterpret the accuracy of a model as referring, not to how well it predicts future experience, but to how well it corresponds to R.
Of course, none of this precludes being mistaken about the real world… that is, I might think that MR1 is the real world, when in fact I just haven’t fully evaluated the predictive value of the various models I’m aware of, and if I were to perform such an evaluation I’d realize that no, actually, MR4 is the real world. And, knowing this, I might have various degrees of confidence in various models, which I can describe as “possible worlds.”
And I might have preferences as to which of those worlds is real. For example, MP1 and MP2 might both be possible worlds, and I am happier in MP1 than MP2, so I prefer MP1 be the real world. Similarly, I might prefer MP1 to MP2 for various other reasons other than happiness.
Which, again, is fine, but again we can make the reification error by assigning to R various attributes which correspond, not only to the real world (that is, the most accurate model), but to the various possible worlds MRx..y. But this isn’t a novel error, it’s just the extension of the original error of reification of the real world onto possible worlds.
That said, talking about it gets extra-confusing now, because there are now several different mistaken ideas about reality floating around… the original “naive realist” mistake of positing R that corresponds to MR, the “multiverse” mistake of positing R that corresponds to MRx..y, etc. When I say to a naive realist that treating R as something that exists outside of a model is just an error, for example, the naive realist might misunderstand me as trying to say something about the multiverse and the relationships between things that “exist in the world” (outside of a model) and “exist in possible worlds” (outside of a model), which in fact has nothing at all to do with my point, which is that the whole idea of existence outside of a model is confused in the first place.

Have I understood your position?
As was the case once or twice before, you have explained what I meant better than I did in my earlier posts. Maybe you should teach your steelmanning skills, or make a post out of it.
The reification error you describe is indeed one of the fallacies a realist is prone to. Pretty benign initially, it eventually grows cancerously into the multitude of MRs whose accuracy is undefined, either by definition (QM interpretations) or through untestable ontologies, like “everything imaginable exists”. Once you fall for it, promoting any M→R, or a certain set {MP}→R, seems forever meaningful.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Maybe you should teach your steelmanning skills, or make a post out of it.
I’ve thought about this, but on consideration the only part of it I understand explicitly enough to “teach” is Miller’s Law (the first one), and there’s really not much more to say about it than quoting it and then waiting for people to object. Which most people do, because approaching conversations that way seems to defeat the whole purpose of conversation for most people (convincing other people they’re wrong). My goal in discussions is instead usually to confirm that I understand what they believe in the first place. (Often, once I achieve that, I become convinced that they’re wrong… but rarely do I feel it useful to tell them so.)
The rest of it is just skill at articulating positions with care and precision, and exerting the effort to do so. A lot of people around here are already very good at that, some of them better than me.
The unaddressed issue is the means of actualizing a specific model (that is, making it the most accurate). After all, if all you manipulate is models, how do you affect your future experiences?
Yes. I’m not sure what to say about that on your account, and that was in fact where I was going to go next.
Actually, more generally, I’m not sure what distinguishes experiences we have from those we don’t have in the first place, on your account, even leaving aside how one can alter future experiences.
After all, we’ve said that models map experiences to anticipated experiences, and that models can be compared based on how reliably they do that, so that suggests that the experiences themselves aren’t properties of the individual models (though they can of course be represented by properties of models). But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
I assume you reject that conclusion, but I’m not sure how. On a naive realist’s view, rejecting this is easy: reality constrains experiences, and if I want to affect future experiences I affect reality. Accurate models are useful for affecting future experiences in specific intentional ways, but not necessary for affecting reality more generally… indeed, systems incapable of constructing models at all are still capable of affecting reality. (For example, a supernova can destroy a planet.)
(On a multiverse realist’s view, this is significantly more complicated, but it seems to ultimately boil down to something similar, where reality constrains experiences and if I want to affect the measure of future experiences, I affect reality.)
Another unaddressed issue derives from your wording: “how do you affect your future experiences?” I may well ask whether there’s anything else I might prefer to affect other than my future experiences (for example, the contents of models, or the future experiences of other agents). But I suspect that’s roughly the same problem for an instrumentalist as it is for a realist… that is, the arguments for and against solipsism, hedonism, etc. are roughly the same, just couched in slightly different forms.
But if they aren’t properties of models, well, what are they? On your account, it seems to follow that experiences don’t exist at all, and there simply is no distinction between experiences we have and those we don’t have.
Somewhere way upstream I said that I postulate experiences (I called them inputs), so they “exist” in this sense. We certainly don’t experience “everything”, so that’s how you tell “between experiences we have and those we don’t have”. I did not postulate, however, that they have an invisible source called reality, pitfalls of assuming which we just discussed. Having written this, I suspect that this is an uncharitable interpretation of your point, i.e. that you mean something else and I’m failing to Millerize it.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that the models in M1 contain K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here. Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other. Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability.
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense? If so, can you clarify which is the case? If not, can you say more about why not?
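The box argument can be checked mechanically. Below is a toy brute-force sketch of my own (the box names, counts, and sizes are purely illustrative): since a box can only be inside a strictly larger box, we can enumerate every size assignment for two red and two blue boxes and confirm that no assignment satisfies both universal claims at once.

```python
from itertools import permutations

# "Every red box is inside some blue box" AND "every blue box is inside
# some red box": strictly smaller size is a necessary condition for
# containment, so if no size assignment satisfies both universals,
# no physical nesting can either.
reds, blues = ["r1", "r2"], ["b1", "b2"]
boxes = reds + blues

found = False
for sizes in permutations([1, 2, 3, 4]):  # all distinct-size assignments
    size = dict(zip(boxes, sizes))
    can_be_inside = lambda x, y: size[x] < size[y]
    red_ok = all(any(can_be_inside(r, b) for b in blues) for r in reds)
    blue_ok = all(any(can_be_inside(b, r) for r in reds) for b in blues)
    if red_ok and blue_ok:
        found = True

print(found)  # False: whichever box is largest cannot be inside anything
```

The search always fails for the same reason the informal argument gives: some box is largest, and it has nothing to be contained by.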
I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true [implicitly assuming that X is not the same as Y, I am guessing].
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error
Sort of, yes. Except mind is also a model.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations, and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
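The “shared node” point can be illustrated with a toy simulation (my own construction; the variables and noise levels are arbitrary assumptions): two input streams driven by a common hidden cause are correlated even though neither causes the other, while streams with no shared node are not.

```python
import random

random.seed(0)  # deterministic for reproducibility

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

shared, indep_a, indep_b = [], [], []
for _ in range(10_000):
    h = random.gauss(0, 1)  # hidden "common source" node
    # two inputs that both depend on h, plus private noise:
    shared.append((h + random.gauss(0, 0.5), h + random.gauss(0, 0.5)))
    # two inputs with no shared node at all:
    indep_a.append(random.gauss(0, 1))
    indep_b.append(random.gauss(0, 1))

xs, ys = zip(*shared)
print(correlation(xs, ys))            # high: shared node induces correlation
print(correlation(indep_a, indep_b))  # near zero: no shared node
```

The point is only the qualitative contrast: a common parent in the network produces marginal correlation between its children, which is what the positron-style predictions would look like if inputs share upstream structure.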
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10” ? If so, can you make a reasonably confident prediction as to what will happen ? What would your prediction be ?
The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it.
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say), which implies there’s something special about the correct model beyond mere accuracy.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent.
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the antipattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were.))
Maybe we should organize a discussion where everyone has to take positions other than their own?
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back.
Yeah, I remember being surprised by that reading the Sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so
Wait, there are nonrealists other than shminux here?
Hrm.
OK.
So “existence” properly refers to a property of subsets of models (e.g., “my keyboard exists” asserts that M1 contain K), as discussed earlier, and “existence” also properly refers to a property of inputs (e.g., “my experience of my keyboard sitting on my desk exists” and “my experience of my keyboard dancing the Macarena doesn’t exist” are both coherent, if perhaps puzzling, things to say), as discussed here.
Yes?
Which is not necessarily to say that “existence” refers to the same property of subsets of models and of inputs. It might, it might not, we haven’t yet encountered grounds to say one way or the other.
Yes?
OK. So far, so good.
And, responding to your comment about solipsism elsewhere just to keep the discussion in one place:
Well, I agree that when a realist solipsist says “Mine is the only mind that exists” they are using “exists” in a way that is meaningless to an instrumentalist.
That said, I don’t see what stops an instrumentalist solipsist from saying “Mine is the only mind that exists” while using “exists” in the ways that instrumentalists understand that term to have meaning.
That said, I still don’t quite understand how “exists” applies to minds on your account. You said here that “mind is also a model”, which I understand to mean that minds exist as subsets of models, just like keyboards do.
But you also agreed that a model is a “mental construct”… which I understand to refer to a construct created/maintained by a mind.
The only way I can reconcile these two statements is to conclude either that some minds exist outside of a model (and therefore have a kind of “existence” that is potentially distinct from the existence of models and of inputs, which might be distinct from one another) or that some models aren’t mental constructs.
My reasoning here is similar to how if you said “Red boxes are contained by blue boxes” and “Blue boxes are contained by red boxes” I would conclude that at least one of those statements had an implicit “some but not all” clause prepended to it… I don’t see how “For all X, X is contained by a Y” and “For all Y, Y is contained by an X” can both be true.
Does that make sense?
If so, can you clarify which is the case?
If not, can you say more about why not?
And what do you mean here by “true”, in an instrumental sense? Do you mean the mathematical truth (i.e. a well-formed finite string, given some set of rules), or the measurable truth (i.e. a model giving accurate predictions)? If it’s the latter, how would you test for it?
Beats me.
Just to be clear, are you suggesting that on your account I have no grounds for treating “All red boxes are contained by blue boxes AND all blue boxes are contained by red boxes” differently from “All red boxes are contained by blue boxes AND some blue boxes are contained by red boxes” in the way I discussed?
If you are suggesting that, then I don’t quite know how to proceed. Suggestions welcomed.
If you are not suggesting that, then perhaps it would help to clarify what grounds I have for treating those statements differently, which might more generally clarify how to address logical contradiction in an instrumentalist framework.
Actually, thinking about this a little bit more, a “simpler” question might be whether it’s meaningful on this account to talk about minds existing. I think the answer is again that it isn’t, as I said about experiences above… models are aspects of a mind, and existence is an aspect of a subset of a model; to ask whether a mind exists is a category error.
If that’s the case, the question arises of whether (and how, if so) we can distinguish among logically possible minds, other than by reference to our own.
So perhaps I was too facile when I said above that the arguments for and against solipsism are the same for a realist and an instrumentalist. A realist rejects or embraces solipsism based on their position on the existence and moral value of other minds, but an instrumentalist (I think?) rejects a priori the claim that other minds can meaningfully be said to exist or not exist, so presumably can’t base anything on such (non)existence.
So I’m not sure what an instrumentalist’s argument rejecting solipsism looks like.
Sort of, yes. Except mind is also a model.
Well, to a solipsist hers is the only mind that exists; to an instrumentalist, as we have agreed, the term “exist” does not have a useful meaning beyond measurability. For example, the near-solipsist idea of a Boltzmann brain is not an issue for an instrumentalist, since it changes nothing in their ontology. Same deal with dreams, hallucinations and simulation.
In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs which we haven’t yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally low) if the inputs were mutually independent.
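Purely as an illustration of that last point (toy numbers, and the variables here are hypothetical, not anything from the discussion): two input streams driven by a hidden common cause correlate, while otherwise-identical independent streams don’t, beyond sampling noise. A minimal sketch:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

N = 10_000

# Two input streams contingent on a shared hidden node C
C = [random.gauss(0, 1) for _ in range(N)]
X = [c + random.gauss(0, 0.5) for c in C]
Y = [c + random.gauss(0, 0.5) for c in C]

# Two mutually independent input streams, for contrast
U = [random.gauss(0, 1) for _ in range(N)]
V = [random.gauss(0, 1) for _ in range(N)]

print(corr(X, Y))  # strongly positive: the shared node shows through
print(corr(U, V))  # near zero: independence leaves no pattern
```

This is just the “shared node higher up” point made concrete: conditioning two observables on a common parent induces correlation between them, which is exactly the pattern the common-source explanation predicts and the independence hypothesis doesn’t.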
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
Nor do I.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
I.e., realism explains how you can predict at all.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10” ? If so, can you make a reasonably confident prediction as to what will happen ? What would your prediction be ?
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model-formulation, prediction, accuracy, etc.
Fixed that for you.
I suppose shminux would claim that, explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.
But that’s a terrible argument. If you can’t justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
Shminux seems to be positing inputs and models at the least.
I think you quoted the wrong thing there, BTW.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.
And inverted stupidity is..?
If I understand both your and shminux’s comments, this might express the same thing in different terms:
We have experiences (“inputs”.)
We wish to optimize these inputs according to whatever goal structure.
In order to do this, we need to construct models to predict how our actions affect future inputs, based on patterns in how inputs have behaved in the past.
Some of these models are more accurate than others. We might call accurate models “real”.
However, the term “real” holds no special ontological value, and they might later prove inaccurate or be replaced by better models.
Thus, we have a perfectly functioning agent with no conception (or need for) a territory—there is only the map and the inputs. Technically, you could say the inputs are the territory, but the metaphor isn’t very useful for such an agent.
Huh, looks like we are, while not in agreement, at least speaking the same language. Not sure how Dave managed to accomplish this particular near-magical feat.
As before, I mostly attribute it to the usefulness of trying to understand what other people are saying.
I find it’s much more difficult to express my own positions in ways that are easily understood, though. It’s harder to figure out what is salient and where the vastest inferential gulfs are.
You might find it correspondingly useful to try and articulate the realist position as though you were trying to explain it to a fellow instrumentalist who had no experience with realists.
I actually tried this a few times, even started a post draft titled “explain realism to a baby AI”. In fact, I keep fighting my own realist intuition every time I don the instrumentalist hat. But maybe I am not doing it well enough.
Ah. Yeah, if your intuitions are realist, I expect it suffers from the same problem as expressing my own positions. It may be a useful exercise in making your realist intuitions explicit, though.
You are right. I will give it a go. Just because it’s obvious doesn’t mean it should not be explicit.
Maybe we should organize a discussion where everyone has to take positions other than their own? If this really helps clarity (and I think it does) it could end up producing insights much more difficult (if not actually impossible) to reach with normal discussion.
(Plus it would be good practice at the Ideological Turing Test, generalized empathy skills, avoiding the anti-pattern of demonizing the other side, and avoiding steelmanning arguments into forms that don’t threaten your own arguments (since they would be threatening the other side’s arguments, as it were).)
It seems to me to be one of the basic exercises in rationality, also known as “Devil’s advocate”. However, Eliezer dislikes it for some reason, probably because he thinks that it’s too easy to do poorly and then dismiss with a metaphorical self-congratulatory pat on one’s own back. Not sure how much of this is taught or practiced at CFAR camps.
Yup. In my experience, though, Devil’s Advocates are usually pitted against people genuinely arguing their cause, not other devil’s advocates.
Yeah, I remember being surprised by that when reading the Sequences. He seemed to be describing acting as your own devil’s advocate, though, IIRC.
Well, if any nonrealists want to argue the realist position in response to my articulation of the instrumentalist position, they are certainly welcome to do so, and I can try to continue defending it… though I’m not sure how good a job of it I’ll do.
I was actually thinking of random topics, perhaps ones that are better understood by LW regulars, at least at first. Still …
Wait, there are nonrealists other than shminux here?
Beats me.
Actually, that’s just the model I was already using. I noticed it was shorter than Dave’s, so I figured it might be useful.