Feels like there’s some kind of frame-error here, like you’re complaining that the move in question isn’t using a particular interface, but the move isn’t intended to use that interface in the first place? Can’t quite put my finger on it, but I’ll try to gesture in the right direction.
Consider ye olde philosophers who liked to throw around syllogisms. You and I can look at many of those syllogisms and be like “that’s cute and clever and does not bind to reality at all, that’s not how real-thinking works”. But if we’d been around at the time, very plausibly we would not have been able to recognize the failure; maybe we would not have been able to predict in advance that many of the philosophers’ clever syllogisms totally fail to bind to reality.
Nonetheless, it is still useful and instructive to look at those syllogisms and say “look, these things obviously-in-some-sense do not bind to reality, they are not real-thinking, and therefore they are strong evidence that there is something systematically wrong with the thinking-methods of those philosophers”. (Eliezer would probably reflexively follow that up with “so I should figure out what systematic thinking errors plagued those seemingly-bright philosophers, and caused them to deceive themselves with syllogisms, in order to avoid those errors myself”.)
And if there’s some modern-day philosopher standing nearby saying that in fact syllogisms totally do bind to reality… then yeah, this whole move isn’t really a response to them. That’s not really what it’s intended for. But even if one’s goal is to respond to that philosopher, it’s probably still a useful first step to figure out what systematic thinking error causes them to not notice that many of their syllogisms totally fail to bind to reality.
So I guess maybe… Eliezer’s imagined audience here is someone who has already noticed that bio anchors and the Carlsmith thing fail to bind to reality, but you’re criticizing it for not instead responding to a hypothetical audience who thinks that the reports maybe do bind to reality?
I almost added a sentence at the end of my comment to the effect of…
“Either someone did think that X was blindingly obvious, in which case they don’t need to be told, or it wasn’t blindingly obvious to them, and they should pay attention to the correct prediction and ignore the assertion that it was obvious. In either case… the statement isn’t doing anything?”
Who are statements like these for? Is it for the people who thought that things were obvious to find and identify each other?
To gesture at a concern I have (which I think is probably orthogonal to what you’re pointing at):
On a first pass, the only people who might be influenced by statements like that are being influenced epistemically illegitimately.
Like, I’m imagining a person, Bob, who heard all the arguments at the time and did not feel confident enough to make a specific prediction. But then we all get to wait a few years and see how (some of the questions, though not most of them) actually played out, and then Eliezer or whoever says “not only was I right, it was blindingly obvious that I was right, and we all should have known all along!”
This is in practice received by Bob as almost an invitation to rewrite history and hindsight bias about what happened. It’s very natural to agree with Eliezer (or whoever) that, “yeah, it was obvious all along.” [1]
And that’s really sus! Bob didn’t get new information or think of new considerations that caused the confusing question to go from confusing to obvious. He just learned the answer!
He should be reminding himself that he didn’t in fact make an advance prediction, and remembering that at the time, it seemed like a confusing hard-to-call question, and analyzing what kinds of general thinking patterns would have allowed him to correctly call this one in advance.
I think when Eliezer gets irate at people for what he considers their cognitive distortions:
It doesn’t convince the people he’s ostensibly arguing against, because those people don’t share his premises. They often disagree with him, on the object level, about whether the specific conclusions under discussion have been falsified.
(eg Ryan saying he doesn’t think bio anchors was unreasonable, in this thread, or Paul disagreeing with Eliezer’s claim ~”that people like Paul are surprised by how the world actually plays out.”)
It doesn’t convince the tiny number of people who could already see for themselves that those ways of thinking were blindingly obviously flawed (and/or who share an error pattern with Eliezer that causes them to make the same mistake).
(eg John Wentworth)
It does sweep up some social-ideologically doomer-y people into feeling more confident in their doomerism and related beliefs, both by social proof (Eliezer is so confident and assertive, which makes me feel more comfortable asserting high P(doom)s), and because Eliezer’s setting a frame in which he’s right, people doing Real Thinking(TM) can see that he’s right, and anyone who doesn’t get it is blinded by frustrating biases.
(eg “Bob”, though I’m thinking of a few specific people.)
It alienates a bunch of onlookers, both people who think that Eliezer is wrong / making a mistake, and people who are agnostic.
In all cases, this seems either unproductive or counterproductive.
Like, there’s some extra psychological oomph of just how right Eliezer (or whoever) was and how wrong the other parties were. You get to be on the side of the people who were right all along, against OpenPhil’s powerful distortionary forces / the power of modest epistemology / whatever. There’s some story that the irateness invites onlookers like Bob to participate in.
Ok, I think one of the biggest disconnects here is that Eliezer is currently talking in hindsight about what we should learn from past events, and this is and should often be different from what most people could have learned at the time. Again, consider the syllogism example: just because you or I might have been fooled by it at the time does not mean we can’t learn from the obvious-in-some-sense foolishness after the fact. The relevant kind of “obviousness” needs to include obviousness in hindsight for the move Eliezer is making to work, not necessarily obviousness in advance, though it does also need to be “obvious” in advance in a different sense (more on that below).
Short handle: “It seems obvious in hindsight that <X> was foolish (not merely a sensible-but-incorrect prediction from insufficient data); why wasn’t that obvious at the time, and what pattern do I need to be on the watch for to make it obvious in the future?”
Eliezer’s application of that pattern to the case at hand goes:
It seems obvious-in-some-sense in hindsight that bio anchors and the Carlsmith thing were foolish, i.e. one can read them and go “man this does seem kind of silly”.
Insofar as that wasn’t obvious at the time, it’s largely because people were selecting for moderate-sounding conclusions. (That’s not the only generalizable pattern which played a role here, but it’s an important one.)
So in the future, I should be on the lookout for the pattern of selecting for moderate-sounding conclusions.
I think an important gear here is that things can be obvious-in-hindsight, but not in advance, in a way which isn’t really a Bayesian update on new evidence and therefore doesn’t strictly follow prediction rules.
Toy example:
Someone publishes a proof of a mathematical conjecture, which enters canon as a theorem.
Some years later, another person stumbles on a counterexample.
Surprised mathematicians go back over the old proof, and indeed find a load-bearing error. Turns out the proof was wrong!
The key point here is that the error was an error of reasoning, not an error of insufficient evidence or anything like that. The error was “obvious” in some sense in advance; a mathematician who’d squinted at the right part of the proof could have spotted it. Yet in practice, it was discovered by evidence arriving, rather than by someone squinting at the proof.
Note that this toy example is exactly the sort where the right primary move to make afterwards is to say “the error is obvious in hindsight, and was obvious-in-some-sense beforehand, even if nobody noticed it. Why the failure, and how do we avoid that in the future?”.
This is very much the thing Eliezer is doing here. He’s (he claims) pointing to a failure of reasoning, not of insufficient evidence. For many people, the arrival of more recent evidence has probably made it more obvious that there was a reasoning failure, and those people are the audience who (hopefully) get value from the move Eliezer made—hopefully they will be able to spot such silly patterns better in the future.