I don’t think that the MIRI book would hold up if you analyzed it with this level of persnicketiness–they were absolutely not precise at the level of distinguishing between the whole development process and single training runs. (Which is arguably fine–they were trying to write a popular book, not trying to persuade super high-context readers of anything!) So this complaint strikes me as somewhat of an isolated demand for rigor.
I’m not trying to debate or play gotcha. I agree that if I tried to do adversarial nitpicking at IABIED I could make it sound equally bad. I found Will’s review convincing, in the sense that it intuitively snapped me into the worldview where the evolutionary analogy isn’t a good argument. I spent the day thinking about it, wrote out my own steelman of it that extrapolated the details, re-evaluated whether I thought the original argument was valid, and decided that yeah, it still was. This exercise was partly motivated by you saying in another comment that your complaints were similar.
Then I went through and found the important differences between my steelman-Will-beliefs and my actual beliefs, the places where I thought the review was locally making a mistake, wrote them down, and turned that into this shortform. I framed it as a misrepresentation after re-reading chapter 4 to check how my argument matched up. Maybe this was a bad way to write it up. It definitely feels like he’s doing the opposite of steelmanning: not particularly trying to convey a good version of the argument in the book, or to understand the coherent worldview that produced it.
But it’s an honest guess that this is a thing Will is missing (how the evolution analogy should be scoped, and how the other premises are separate from it and also necessary). The guess was constructed without knowing Will or reading much of his other writing, so I admit it’s pretty likely to be wrong, but if so maybe someone will explain how.
But either way, I figured this particular part of what I wrote today was worth publishing because of how often I hear people misunderstand the evolution analogy.
I feel like your title for this short-form post is unreasonably aggressive, given what you’re saying here.
I found your articulation of the structure of the book’s argument helpful and clarifying.
I’m planning to write something more about this at some point: I think a key issue here is that we aren’t making the kind of arguments where “local validity” is a reliable concept. No one is trying to make proofs; they’re trying to make defeasible heuristic arguments. Suppose the book makes an argument of the form “Because of argument A, I believe conclusion X. You might have thought that B is a counterargument to A. But actually, because of argument C, B doesn’t work.” If Will thinks that argument C doesn’t work, I think it’s fine for him to summarize this as: “they make an argument mostly around A, which I don’t think suffices to establish X.”
You’re right, I edited it.
That makes sense about local validity.