I find it so interesting how often this kind of thing keeps happening, and I can’t tell how much of it is honest mistakes versus lack of interest in the core questions or willful self-delusion. Or maybe they’re right, but aren’t noticing that they are actually proving (and exemplifying) that humans also lack generalizable reasoning capability.
My mental analogy for the Tower of Hanoi case: Imagine I went to a secretarial job interview, and they said they were going to test my skills by making me copy a text by hand with zero mistakes. Harsh, odd, but comprehensible. If they then said, “Here’s a copy of Crime and Punishment, along with a pencil and five sheets of loose-leaf paper. You have 15 minutes,” then the result does not mean what they claim it means. Even if they gave me plenty of paper and time, but I said, “#^&* you, I’m leaving,” or, “No, I’ll copy page one, and prove that if I can copy page n I can copy page n+1, so that I have demonstrated the necessary skill, proof by induction,” then the result still does not mean what they claim it means.
I do really appreciate the chutzpah of talking about children solving Towers of Hanoi. Yes, given an actual toy and enough time to play, they often can solve the puzzle. Given only pen, paper, a verbal description of the problem, and a demand for an error-free, specified-format written procedure for producing the solution, not so much. These are not the same task. There’s a reason many of us had to write elementary school essays on things like precisely laying out all the steps required to make a sandwich: this ability is a dimension along which humans vary greatly. If that’s not compelling, think of all the badly written instruction manuals you’ve come across in your life.
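To make the ‘not the same task’ point concrete, here is a minimal Python sketch (my own illustration, not anything from the paper or the tweet): the procedure for solving the puzzle is a few lines of recursion, but the written-out answer being demanded is the full move list, which grows as 2^n - 1 moves, so the real test is flawlessly transcribing a very long sequence.

```python
# Minimal illustrative sketch: the solving procedure is tiny,
# but the error-free move list it must emit grows as 2**n - 1.

def hanoi(n, src="A", dst="C", via="B"):
    """Yield every (from_peg, to_peg) move needed to shift n disks from src to dst."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, via, dst)  # park the top n-1 disks on the spare peg
    yield (src, dst)                        # move the largest disk
    yield from hanoi(n - 1, via, dst, src)  # bring the n-1 disks back on top of it

for n in (3, 10, 20):
    print(n, "disks:", sum(1 for _ in hanoi(n)), "moves")
    # 3 disks: 7 moves; 10 disks: 1,023 moves; 20 disks: 1,048,575 moves
```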
I also appreciate the chutzpah of that ‘goalpost shifting’ tweet. It just kind of assumes away any possibility that there is a difference between AGI and ASI, and in the process inadvertently implies a claim that humans also lack reasoning capability? And spreadsheets do crash and freeze up when you put more rows and columns of data in them than your system can handle; I’d much rather they be aware of this limit and warn me I was about to trigger a crash.