Yeah, actually, I think your counterargument is correct. I basically had a cached thought that Block was trying to do with Blockhead a similar thing to what Searle was trying to do with the Chinese Room. I should have checked it more carefully.

I've now edited to remove my critique of Block himself, while still keeping the argument that Blockhead is uninformative about AI for (some of) the same reasons that Bayesianism is.