I think he also objects to putting numbers on things, and I avoid doing it too. A concrete example: I explicitly avoid putting numbers on things in LessWrong posts. The reason is straightforward: if a number appears anywhere in the post, about half of the conversation in the comments will focus on that number to the exclusion of the point of the post (or the lack of one, etc.). So unless numbers are indeed the thing you want to be talking about, in the sense of detailed results of specific computations, they positively distract the audience from the rest of the post.
I focused on the communication aspect in my response, but I should probably also say that, personally, I don't really track what the number is even when I do go to the trouble of computing a prior. The point of generating the number is to clarify the qualitative information, and the point remains the qualitative information after I get the number; I only start paying attention to the number itself if it stays consistent enough across repetitions of the generate-a-number move that I recognize it as basically the same as the last few times. Even then, I spend most of my effort on the qualitative level directly.
I make an analogy to computer programs: the sheer fact of successfully producing an output without errors weighs much more than whatever the value of the output is. The program remains the central concern, and continuing to improve it using known patterns and good practices for writing code is usually the most effective method. Taking the programming analogy one layer further, there's a significant stretch of time during which you can be extremely confident the output is meaningless: suppose you haven't even finished implementing what you already know to be the minimum requirements, but you compile the program anyway, just to test for errors so far. There's no point in running the program all the way to an output, because you know the output would be meaningless. In the programming analogy, a focus on the value of the output is a kind of "premature optimization is the root of all evil" problem.
I do think this probably reflects the fact that Eliezer's time is mostly spent on poorly understood problems like AI, rather than on stable, well-understood domains where working with numbers is a much more reasonable prospect. But even in the case where I am trying to learn something that is well understood, just not by me, reaching for a number feels opposed to the idea of hugging the query, somehow. Or in virtue language: how does the number cut the enemy?