Well, if something’s not actually happening, then I’m not actually seeing it happen.
Not actually: your seeing it happen isn’t real. But this unreality of seeing it happen proceeds in a specific way; it’s not indeterminate greyness, and it’s not arbitrary.
if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing.
If your response (that never happens) could be 0 or 1, it couldn’t be nothing. If it’s 0 (despite never having been observed to be 0), the claim that it’s 1 is false, and the claim that it’s nothing doesn’t type check.
I’m guessing that the analogy between you and an algorithm doesn’t hold strongly in your thinking about this: it’s the use of “you” in place of “algorithm” that does a lot of work in these judgements, work that wouldn’t happen if we were talking about an “algorithm”. So let’s talk about algorithms to establish common ground.
Let’s say we have a pure total procedure f written in some programming language, with the signature f : O → D, where O = Texts is the type of observations and D = {0,1} is the type of decisions. Let’s say that in all plausible histories of the world, f is never evaluated on the argument “green sky”. In this case I would say that it’s impossible for the argument (observation) to be “green sky”: procedure f is never evaluated with this argument in actuality.
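For concreteness, here’s a minimal sketch of such a procedure, assuming Haskell stands in for the “some programming language”; the particular rule inside f is made up for illustration, and any pure total function of this type would serve:

```haskell
-- Observation and Decision mirror O = Texts and D = {0,1} from the text.
type Observation = String

data Decision = D0 | D1
  deriving (Eq, Show)

-- Pure and total: f is defined on every Observation, including
-- arguments it never actually receives in any history of the world.
f :: Observation -> Decision
f obs = if obs == "green sky" then D0 else D1
```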
Yet it so happens that f(“green sky”) is 0. It’s not 1 and not nothing. There could be processes sensitive to this fact that don’t specifically evaluate f on this argument. And there are facts about what happens inside f, the intermediate variables or the states of some abstract machine that does the evaluation (procedure f’s experience of observing the argument and formulating a response to it), as it’s evaluated on this never-encountered argument. These facts are never observed in actuality, yet they are well-defined once f and the abstract machine are specified.
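To make the point about intermediate states concrete, here’s a hedged sketch reusing the types above; the variant f' and the name wordCount are my inventions, not anything from the original setup:

```haskell
-- A variant of f with an explicit intermediate state.
f' :: Observation -> Decision
f' obs =
  let wordCount = length (words obs)  -- an intermediate state of the evaluation
  in  if wordCount == 2 then D0 else D1

-- Even if f' is never applied to "green sky", it is a definite fact
-- that in such an evaluation wordCount would be bound to 2 and the
-- result would be D0. That fact is fixed by the definition of f'
-- together with the language's semantics, not by the evaluation
-- occurring; a checker that proves f' returns D0 on every two-word
-- observation is sensitive to it without ever making that exact call.
```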
You can ask: “but if it did happen, what would be your response?”—and that’s a reasonable question. But any answer to that question would indeed have to take as given that the event in question were in fact actually happening (otherwise the question is meaningless).
The question of what f(“green sky”) would evaluate to isn’t meaningless, regardless of whether the evaluation of f on the argument “green sky” is an event that in fact actually happens. Actually extant evidence for a particular answer, such as a proof that the answer is 0, is arguably also evidence of the evaluation having taken place. But reasoning about the answer doesn’t necessarily pin it down exactly, in which case the evaluation didn’t necessarily take place.
For example, perhaps we only know that f(“green sky”) is the same as g(“blue sky”), but don’t know what the values are. Actually proving this equality doesn’t in general require either f(“green sky”) or g(“blue sky”) to be actually evaluated.
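One way such a situation can arise, sketched under my own assumption (not anything stated above) that f and g share structure; all the names here (dropColour, decide, f2, g2) are hypothetical:

```haskell
-- Both procedures normalise the observation before applying the same
-- inner rule.
dropColour :: Observation -> Observation
dropColour = unwords . drop 1 . words

decide :: Observation -> Decision
decide s = if even (length s) then D1 else D0  -- stands in for an opaque inner rule

f2, g2 :: Observation -> Decision
f2 = decide . dropColour
g2 = decide . dropColour

-- Equational reasoning gives
--   f2 "green sky" = decide "sky" = g2 "blue sky",
-- which proves the two applications equal without evaluating either.
-- (Here the body of decide happens to be visible; in the scenario
-- above, running it is exactly the step that never takes place.)
```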
You seem to be saying: “yes, certain things that can happen are impossible”, which is very much counter to all ordinary usage.
Winning a billion dollars on the stock market by following the guidance of a random number generator technically “can happen”, but I feel it’s a central example of something impossible in ordinary usage of the word. I also wouldn’t say that it can happen, without the scare quotes, even though technically it can.
I would not say “this is impossible and isn’t happening”.
This is mostly relevant for decisions between influencing one world and influencing another, which becomes possible when there are predictors looking from one world into the other. I don’t think behavior within a world (in ordinary situations) should significantly change depending on that world’s share of reality, but I also don’t see a problem with noticing that the share of reality of some worlds is much smaller than that of others. Another use is manipulating a predictor that imagines you seeing things that you (but not the predictor) know can’t happen, and that won’t notice you noticing.
Well, if something’s not actually happening, then I’m not actually seeing it happen.
Not actually: your seeing it happen isn’t real. But this unreality of seeing it happen proceeds in a specific way; it’s not indeterminate greyness, and it’s not arbitrary.
What do you mean, “proceeds in a specific way”? It doesn’t proceed at all. Because it’s not happening, and isn’t real.
if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing.
If your response (that never happens) could be 0 or 1, it couldn’t be nothing. If it’s 0 (despite never having been observed to be 0), the claim that it’s 1 is false, and the claim that it’s nothing doesn’t type check.
This seems wrong to me. If my response never happens, then it’s nothing; it’s the claim that it’s 1 that doesn’t type check, and likewise the claim that it’s 0. It can’t be either 1 or 0, because it doesn’t happen.
(In algorithm terms, if you like: what is the return value of a function that is never called? Nothing, because it’s never called and thus never returns anything. Will that function return 0? No. Will it return 1? Also no.)
Let’s say we have a pure total procedure f written in some programming language, with the signature f : O → D, where O = Texts is the type of observations and D = {0,1} is the type of decisions. Let’s say that in all plausible histories of the world, f is never evaluated on the argument “green sky”. In this case I would say that it’s impossible for the argument (observation) to be “green sky”: procedure f is never evaluated with this argument in actuality.
There could be processes sensitive to this fact that don’t specifically evaluate f on this argument.
Please elaborate!
The question of what f(“green sky”) would evaluate to isn’t meaningless, regardless of whether the evaluation of f on the argument “green sky” is an event that in fact actually happens.
Indeed, but the question of what f(“green sky”) actually returns certainly is meaningless if f(“green sky”) is never evaluated.
Actually extant evidence for a particular answer, such as a proof that the answer is 0, is arguably also evidence of the evaluation having taken place. But reasoning about the answer doesn’t necessarily pin it down exactly, in which case the evaluation didn’t necessarily take place.
For example, perhaps we only know that f(“green sky”) is the same as g(“blue sky”), but don’t know what the values are. Actually proving this equality doesn’t in general require either f(“green sky”) or g(“blue sky”) to be actually evaluated.
I’m afraid I don’t see what this has to do with anything…
Winning a billion dollars on the stock market by following the guidance of a random number generator technically “can happen”, but I feel it’s a central example of something impossible in ordinary usage of the word. I also wouldn’t say that it can happen, without the scare quotes, even though technically it can.
I strongly disagree that this matches ordinary usage!
… predictors looking from one world into the other …
I am not sure what you mean by this? (Or by the rest of your last paragraph, for that matter…)
(Reference for readers who may not be familiar with the relevant terminology, as I was not: Pure Functions and Total Functions.)