It seems to me that this paper is overly long and filled with unnecessary references, even with a view towards philosophers who don’t know anything about the field. It suffices to say that “bottom-up predictability” applied to the mind implies that we can build a machine to do the things which the mind does. The difficulty of doing so has a strict upper bound in the difficulty of building an organic brain from scratch, and is very probably easier than that (if any special physical properties are involved, they can very likely be duplicated by something much easier to build). Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can’t do what the brain does is necessarily wrong (although you might need something that isn’t a digital computer). Anything past that is an empirical technological issue, which is not really in the realm of philosophy at all, but rather of computer scientists and physicists.
The sections on Gödel’s theorem and hypercomputation could be summed up in a quick couple of paragraphs which reference each in turn as examples of objections that physical systems can’t do what minds do, followed by the reminder that if you accept the mind as a physical system then clearly those objections can’t apply. It feels like you just keep saying the same things over and over in the paper; by the end I was wondering what the point was. Certainly I didn’t feel that your title tied into the paper very well, and there wasn’t a strong thesis that stood out to me. I would propose something like: “Given that the brain is a physical system, and physics is consistent (the same laws govern machines we find in nature, like the human brain, and machines we build ourselves), it must be possible in principle to build machines which can do any and all jobs that a human can do.” My proposed thesis is stronger than yours, mostly because machines have already taken many jobs (factory assembly lines), and in fact machines are already able to perform mathematical proofs (look up computer-assisted proofs; they can settle problems that humans can’t, because of the sheer number of cases to check). I also use “machines” instead of “AI” in order to avoid questions of intelligence or the like: the Chinese Room might produce great works of literature even if it is believed not to be intelligent, and that literature could be worth money.
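To make the computer-assisted-proofs point concrete, here is a toy illustration (not one of the famous results like the four-color theorem, which work on the same principle at vastly larger scale): a machine exhaustively verifying Goldbach’s conjecture up to a bound, a check that is trivial for a computer but hopeless for a human at any serious scale. The function names are my own, chosen for the example.

```python
# Toy illustration of a machine checking a mathematical claim exhaustively:
# verify Goldbach's conjecture (every even n >= 4 is a sum of two primes)
# for all even numbers up to a bound.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to even n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

def verify_goldbach_up_to(limit):
    """Check every even number in [4, limit]; True if all have a witness."""
    return all(goldbach_witness(n) is not None for n in range(4, limit + 1, 2))

print(verify_goldbach_up_to(10_000))  # True, checked in moments
```

The philosophical point stands independently of the example; this just shows the sense in which machines already “do mathematics” that humans cannot do by hand.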
Don’t take this as an attack, but rather as criticism that you can use to strengthen your paper. There’s a good point here; I just think it needs to be brought out and given the spotlight. The basic point is not complex, and the only thing you need in order to support it is an argument that the laws of physics don’t treat things we build differently just because we built them (there’s probably some literature here for an appeal to authority if you feel you need one; otherwise a simple argument from Occam’s Razor, plus the failure to observe any such difference so far, is sufficient). You might also want an argument covering machines that are not physically identical to humans; otherwise you’ll lose some of your non-reductionist audience (maybe hypercomputation is possible for humans but nothing else). Such an argument can be achieved through Turing-complete simulation, or, in the case of hypercomputation, through the observation that it should probably be possible to build something that isn’t a brain but uses the same special physics.
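The “Turing-complete simulation” route can be made concrete: any machine whose transition rules we can write down can be stepped, rule by rule, on a general-purpose computer. A minimal sketch follows; the two-state “flipper” machine is a hypothetical example of mine, not anything from the paper.

```python
# Minimal Turing machine simulator: a general-purpose computer can step
# any machine given as a transition table. This is the sense in which
# Turing-complete simulation lets one physical system mimic another's
# computation.

def run_tm(transitions, tape, state="A", halt="HALT", max_steps=10_000):
    """transitions: (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells read as 0
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# Hypothetical example machine: flip 1s to 0s until it reads a 0, then halt.
flipper = {
    ("A", 1): (0, "R", "A"),
    ("A", 0): (1, "R", "HALT"),
}
print(run_tm(flipper, [1, 1, 1, 0]))  # [0, 0, 0, 1]
```

Nothing here depends on what the simulated machine is made of, which is the whole point of the simulation argument.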
[quote]It seems to me that this paper is overly long and filled with unnecessary references, even with a view towards philosophers who don’t know anything about the field.[/quote]
You may be right about this, though I also want to be cautious because of illusion of transparency issues.
[quote]It suffices to say that “bottom-up predictability” applied to the mind implies that we can build a machine to do the things which the mind does.[/quote]
What I want to claim is somewhat stronger than that; notably, there’s the question of whether [i]the general types of machines we already know how to build[/i] can do the things the human mind does. That might not be true if, e.g., you believe in physical hypercomputation (which I don’t, but it’s the kind of thing you want to address if you want to satisfy stubborn philosophers that you’ve dealt with as wide a range of objections as possible).
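For what it’s worth, the reason physical hypercomputation would matter here can be stated computationally: a hypercomputer is, roughly, something that decides questions no ordinary program can, such as halting, by Turing’s diagonal argument. A toy sketch of that argument in code (the “decider” below is a deliberately naive placeholder of my own, not a real algorithm; `diagonal` is never actually run):

```python
# Sketch of why a halting decider cannot be an ordinary program: any
# candidate decider can be defeated by a program that does the opposite
# of whatever the decider predicts about it (Turing's diagonal argument).

def naive_halts(func):
    """A (necessarily wrong) ordinary-program 'halting decider'.
    Placeholder heuristic: claim everything halts."""
    return True

def diagonal():
    """Do the opposite of whatever the decider predicts about us."""
    if naive_halts(diagonal):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop, so halt immediately

print(naive_halts(diagonal))  # the decider says True, yet diagonal() would loop
```

Any replacement for `naive_halts` falls to the same construction, which is why a genuine halting oracle would have to exceed the machines we know how to build.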
[quote]Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can’t do what the brain does is necessarily wrong (although you might need something that isn’t a digital computer).[/quote]
Again, it would be nice if it were that simple, but there are people who insist they’ll have nothing to do with dualism but who advance the idea that computers can’t do what the brain does, and they don’t accept that argument.
[quote]The sections on Gödel’s theorem and hypercomputation could be summed up in a quick couple of paragraphs which reference each in turn as examples of objections that physical systems can’t do what minds do, followed by the reminder that if you accept the mind as a physical system then clearly those objections can’t apply.[/quote]
Again, slightly more complicated than this. Penrose, Proudfoot, Copeland, and others who see AI as somehow philosophically or conceptually problematic often present themselves as accepting that the mind is physical.
Your comment makes me think I need to be clearer about who my opponents are—namely, people who say they accept the mind is physical but claim AI is philosophically or conceptually problematic. Does that sound right to you?
Do those same people still oppose the in-principle feasibility of the Chinese Room? I can understand why such people might have problems with the idea of a conscious AI, but I was not aware of any faction other than substance dualists which thought that machines could never physically replicate a mind. I’m not well-read in the field, so I could certainly be wrong about the existence of such people, but that seems like a basic logic fail. Either a) minds are Turing-computable, meaning we can replicate them; b) minds are hypercomputers in a way which follows some normal physical law, meaning we can replicate them; or c) minds are hypercomputers in a way which cannot be replicated (substance dualism). I don’t see how there is a possible fourth view where minds are hypercomputers that cannot in principle be replicated, yet follow only normal physical laws. Maybe some sort of material anti-reductionist who holds that there is a particular law which governs things that are exactly minds but nothing else? They would need to deny the in-principle feasibility of humans ever building a meat brain from scratch, which is hard to do (and of course it immediately loses to Occam’s Razor, but then this is philosophy, eh?). If you’re neither an anti-reductionist nor a dualist then there’s no way to make the claim, and there are better arguments against the people who are. I don’t really see much point in trying to convince anti-reductionists or dualists of anything, since their beliefs are uncorrelated with reality anyway.
Note: there are still interesting real-life feasibility questions to be explored, but those are technical questions. In any case, your paper would be much improved by adding a clear thesis near the start stating what you’re proposing, in detail.
Oh, and before I forget: the question of whether machines we can currently build can implement a mind is purely a question of whether the mind is a hypercomputer or not. We don’t know how to build hypercomputers yet, but if the mind somehow turned out to be one, we’d presumably figure out how that part of it worked.
Thank you for the detailed commentary.