Could you be more specific?
Sure, but first let me ask: have you read the Sequences? (Asking to get a feel for what to assume you know and don’t know, etc.)
I have. Rereading this particular entry… I see it very differently than I must have the first time. I keep thinking the effects might be explained by status signals/confidence heuristics, e.g., if someone says something implausible and then says something plausible, the plausible thing makes them look like a Reasonable Person, and so you trust the implausible thing more. Yes, that shouldn’t have such a strong effect, and it’s still a bias, but I wouldn’t call it a conjunctive reasoning bias.
Then there’re the situations where people hear a conjunctive and act as if what was said were a conditional, and that’s not so much a reasoning error as a listening error.
That aside, I think you might be misreading the intent of this post? I am not saying “this will happen”. If a detail is off, imagine the other ways it could have gone; many of them would lead to the same place, or a similar place. Many of the things that could prevent us from arriving at this future (failure to develop safe, open protocols, for instance) are bad, and instead of saying “it won’t happen because X will happen, so it’s pointless to think about it”, we should just start talking about X and how to prevent it. We probably have enough time to do something.
So, it seems to me that there are three basic ways to interpret your post, i.e. three claims that it might be making (which certainly aren’t mutually exclusive):
“This [i.e. the described scenario] could happen.”
“This [or something like it] will happen.”
“This should happen.”
There’s little sense in discussing them all at once (or, God forbid, conflating them), so let’s tackle them individually.
“This could happen.”
Sure, maybe it could. Lots of things could happen. There’s actually not much to discuss, here.
“This will happen.”
The scenario described is conjunctive. Split it apart into individual predictions, and we might be able to discuss them. (If you think either X or Y could lead to Z, fine; let’s forget both X and Y, and just examine the claim “Z will happen”.)
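To make the conjunctive point concrete, here is a toy calculation; the probabilities are made up and the independence assumption is unrealistic, but it shows how each added detail multiplies in another penalty:

    # Toy numbers, chosen purely for illustration.
    individual_claims = {
        "claim_A": 0.8,
        "claim_B": 0.7,
        "claim_C": 0.6,
        "claim_D": 0.5,
    }

    # Treating the claims as independent (unrealistic, but it keeps the arithmetic visible),
    # the probability of the whole scenario is the product of its parts, which is far
    # smaller than the probability of any single part.
    p_scenario = 1.0
    for claim, p in individual_claims.items():
        p_scenario *= p

    print(round(p_scenario, 3))  # 0.168, well below any individual claim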
“This should happen.”
Why?
That is: why do you want this to happen? (I don’t. Should I? Why should I?) Also, what, specifically, is the desired outcome? (And what exactly is desirable about it?)
There’s little sense in discussing them all at once
On reflection, this is not as reasonable as it sounds. A working intelligence must entangle the search for desirable outcomes with the search for attainable outcomes on pretty much every level.
A prediction search process that only covers questions of fact, with no regard for questions of desirability (or undesirability; in sum, questions of importance), will effectively be undirected. It will waste hours trying to figure out minutiae about things that don’t matter. For instance, an undirected human intelligence might spend hours trying to model all of the mean things that unimaginative people might think about it, and how to rebuff them, without noticing that these people’s opinions do not matter.
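A crude sketch of the kind of direction I mean (the questions, the numbers, and the scoring rule here are placeholders, not a real design):

    # An undirected searcher takes open questions in whatever order it finds them;
    # a directed one weights each question by how much the answer actually matters.
    open_questions = [
        {"question": "what might critics think of me", "importance": 0.05, "tractability": 0.9},
        {"question": "will safe, open protocols get built", "importance": 0.90, "tractability": 0.4},
        {"question": "the exact date the transition happens", "importance": 0.10, "tractability": 0.2},
    ]

    def priority(q):
        # Spend effort where answers are both valuable and reachable.
        return q["importance"] * q["tractability"]

    for q in sorted(open_questions, key=priority, reverse=True):
        print(q["question"], round(priority(q), 2))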
I think… explicitly distinguishing the conditionals from the conjunctives in this little look forward of mine is more work than I’m willing to do today.
A working intelligence must entangle the search for desirable outcomes with the search for attainable outcomes on pretty much every level.
“Entangle” how? Surely not by confusing or conflating the concepts…?
A prediction search process that only covers questions of fact, with no regard for questions of desirability (or undesirability; in sum, questions of importance), will effectively be undirected. It will waste hours trying to figure out minutiae about things that don’t matter.
What is a “prediction search process”…?
I think… explicitly distinguishing the conditionals from the conjunctives in this little look forward of mine is more work than I’m willing to do today.
Fair enough, but… well, look, you seem to be reading some very complicated things into my comment. All I want to know is:
What, actually, are you claiming? (Are you claiming anything at all? I assume so, since you asked for “refutations”.)
I listed three kinds of claims you might be making. Are you, indeed, making any of these three sorts of claims? If yes, which one(s)? And what are the claims exactly?
There’s no need to go off on any tangents about “prediction search process” or “undirected human intelligence” or anything like that. Really, I’m asking very straightforward questions here!
This is a paraphrase of what intelligence is. If you can implement a search for useful predictions that generalizes over some relatively broad domain of predictable things, that’s AI-complete. That will be an AI. Is this not a common idea?
I am not conflating desirability with expectation. I will always speak of them in the same breath because they are entangled, not just for the technical reasons I expounded, but for deep decision-theoretic reasons that the field has only recently started to get a real grasp on. There are many important situations where people/agents/crowds have to decide what they will believe and what they want to be true simultaneously, because the beliefs/protocols/actions are a direct logical consequence of the desires. For instance, we attribute value to money because we want money to have value. If The Market comes to see some financial system as primarily parasitic or defective, and if its participants are good LDT agents, that system’s currency will not be accepted by The Market after that point. The truth (whether the currency will be valued) will change, because there are situations in which the truth is a direct consequence of desires.
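Here is a toy sketch of the money example (a deliberately crude model with a made-up acceptance threshold, just to show the fixed-point structure I mean):

    # Each agent accepts the currency iff it expects enough of the others to accept it.
    # Iterating that expectation settles into a fixed point, so whether the currency
    # "really" has value ends up being a consequence of what agents expect and want.
    ACCEPTANCE_THRESHOLD = 0.5

    def settled_acceptance(initial_expected_acceptance, steps=50):
        expectation = initial_expected_acceptance
        for _ in range(steps):
            acceptance = 1.0 if expectation >= ACCEPTANCE_THRESHOLD else 0.0
            expectation = acceptance  # everyone updates on everyone else's behaviour
        return acceptance

    print(settled_acceptance(0.9))  # 1.0: the currency is accepted, so it holds value
    print(settled_acceptance(0.2))  # 0.0: the same currency, expected to fail, is worthless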
Which is not especially relevant.
I’m sorry, I’ve already explained the intent of the post to you. You didn’t find the explanation satisfactorily reductive? I don’t really know how to go any further there. I’m not sure how to go about justifying, like, a conversation style. I understand where you’re coming from. Personally, I don’t find the way of engaging that you’re looking for to be fun or productive. You want me to drag you every step of the way. That’s what it feels like, anyway. I can’t be the one to do that. I only have time to give a summary.
If that’s not interesting to people, if I haven’t motivated a deeper exploration, or if it’s not evident to enough people that this would be a useful framework for discussion, well, okay. Maybe I have alienated people who know this area well enough to confirm or refute, or maybe this isn’t the right medium for that.
While I think you are not wrong about the entanglement of intellectual exploration and truth-value, I do think you did not really explain the intent of the post. You only really said half a sentence about it, and that one did seem pretty weird to me:
...and instead of saying “it won’t happen because X will happen, so it’s pointless to think about it”, we should just start talking about X and how to prevent it. We probably have enough time to do something.
This seems to indicate that your goal is to get the people on this site to start working towards the future you described, which seems like a valid and fine goal. However, at this point I am not particularly convinced that working on the future you described is tractable, something I can influence much, or something I should care about. It sure sounds pretty cool, but there are a lot of visions for the future that sound pretty cool.
You want me to drag you every step of the way. That’s what it feels like, anyway. I can’t be the one to do that. I only have time to give a summary.
Don’t have time?! You typed a way longer comment than you would have needed to type if you had just answered my questions!
I’m sorry, I’ve already explained the intent of the post to you. You didn’t find the explanation satisfactorily reductive?
You really didn’t explain it, though. You said “you might be misreading the intent of the post”… and then didn’t follow that up with a statement of what the intent of the post was.
Again, I’m asking very simple questions. You seem to be avoiding answering them. I’m not sure why. It seems like it would take you very little effort to do so, much less effort than making the comments you are making.
Why do you think that scenario planning exercises aren’t worth discussing?
What?
Scenario planning is a common way to think about the future. It’s not about arguing that a specific future has a high probability and will or should happen.