My take is: you shouldn’t expect to get everything right when you try to reason about a moderately complicated system abstractly, no matter how smart you are. You’d like to have a lot of practice so that you can do your best, can get a sense for what kinds of things you tend to miss and how they change the bottom line, can better understand what the returns to thinking are typically like, and so on. This was a fun and unusually self-contained example, where we happened to miss an important and very clean consideration that can be appreciated with very little domain knowledge. (I think realistic cases are usually much more of a mess.)
In this case, I feel pretty confident that I would have noticed this consideration if I had thought about the question for a few hours (and probably less), and I think that it would become obvious if you tried to write out your reasoning sufficiently carefully. But even if I spend hundreds of hours thinking about some issue with AI, I expect to miss all kinds of important and obvious-in-retrospect considerations in a roughly analogous way. (This is related to my view that verification is easier than generation.)
I don’t think that means we shouldn’t try to figure things out by thinking about them. Thinking about what’s going on is an important part of how you get to correct answers quickly and an important complement to empirical data (you need to think when empirical data is hard to come by, to help interpret history and the results of experiments, to prioritize experimentation, etc.).
I’m not sure if your comment is disagreeing with any of this. It sounds like we’re on the same page about the fact that exact reasoning is prohibitively costly, and so you will be reasoning approximately, will often miss things, etc.
Of course, I think even if you successfully notice every on-paper consideration, there are still likely to be messy facts about the real world that you either didn’t know or obviously had no hope of capturing in a model that’s simple enough to reason about. That said, I think that reasoning in practice is basically never purely in this regime (and if you do literally get to this regime for a question, in some sense you’ve probably spent too long thinking about the question relative to doing something else), so in practice wrong conclusions are almost always due to a combination of both “not knowing enough” and “not thinking hard enough” / “not being smart enough.”
> I’m not sure if your comment is disagreeing with any of this. It sounds like we’re on the same page about the fact that exact reasoning is prohibitively costly, and so you will be reasoning approximately, will often miss things, etc.
I agree. The term I’ve heard to describe this state is “violent agreement”.
> so in practice wrong conclusions are almost always due to a combination of both “not knowing enough” and “not thinking hard enough” / “not being smart enough.”
The only thing I was trying to point out (maybe more so for everyone else reading the commentary than for you specifically) is that it is perfectly rational for an actor to “not think hard enough” about some problem and thus arrive at a wrong conclusion (or a correct conclusion for the wrong reason), because that actor has higher-priority items requiring their attention, which puts hard time constraints on how many cycles they can dedicate to lower-priority items, e.g. debating AC efficiency. Rational actors will try to minimize the likelihood that they’ve reached a wrong conclusion, but they’ll also be forced to minimize, or at least not exceed some limit on, their allowed computation cycles, and on most problems that means the computation cost plus any hard time constraint is going to be the actual limiting factor.
Although even that is, I think, more or less what you meant by
> in some sense you’ve probably spent too long thinking about the question relative to doing something else
In engineering R&D we often do a bunch of upfront thinking at the start of a project, with the goal of identifying where we have uncertainty or risk in the proposed design. Then, rather than spend two more months in meetings debating back and forth about who has done the napkin math correctly, we’ll take the things we’re uncertain about and design prototypes to burn down risk directly.
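To make the “limit on allowed computation cycles” point concrete, here’s a toy sketch (entirely my own illustration, with made-up numbers and an assumed exponential error-decay model, nothing from the thread above): if each extra hour of thinking reduces the chance of a wrong conclusion but also carries an opportunity cost, the expected-loss-minimizing amount of thinking can stop well short of “think until you’re sure.”

```python
# Toy model of the tradeoff described above: how long should you think about a
# low-priority question when thinking time has an opportunity cost?
# All numbers and the decay model are made up purely for illustration.

COST_OF_WRONG_CONCLUSION = 100.0   # loss if you act on a wrong conclusion
OPPORTUNITY_COST_PER_HOUR = 20.0   # value of an hour of higher-priority work
BASE_ERROR_RATE = 0.5              # chance of being wrong with zero thought
HALVING_TIME_HOURS = 1.0           # assume each hour of thought roughly halves the error rate

def p_wrong(hours: float) -> float:
    """Assumed model: error probability decays exponentially with thinking time."""
    return BASE_ERROR_RATE * 0.5 ** (hours / HALVING_TIME_HOURS)

def expected_loss(hours: float) -> float:
    """Chance of a wrong conclusion times its cost, plus the cost of time spent thinking."""
    return p_wrong(hours) * COST_OF_WRONG_CONCLUSION + OPPORTUNITY_COST_PER_HOUR * hours

if __name__ == "__main__":
    candidates = [h / 4 for h in range(0, 41)]  # 0 to 10 hours in 15-minute steps
    best = min(candidates, key=expected_loss)
    for h in (0.0, best, 10.0):
        print(f"{h:5.2f} h of thinking: P(wrong)={p_wrong(h):.3f}, "
              f"expected loss={expected_loss(h):6.2f}")
    # With these made-up numbers the optimum is under an hour: you rationally
    # accept a ~30% chance of being wrong because further thinking costs more
    # than it is expected to save.
```

The particular optimum obviously depends entirely on the assumed numbers; the only point is that “stop thinking while still fairly likely to be wrong” falls straight out of a simple cost-benefit model.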