I’m slightly disappointed, because I thought you were going to write a long post on philosophy of science applied to AI. But this one is still a pretty good post.
About the content, I completely agree with you. More than that, I think there is a role for philosophy of science that you don’t mention, one that is actually important for AI: grounding the kind of knowledge produced by computer science.
I mean, you’re using the word “science” here to talk about AI, but computer science, despite its name, is not a natural science in the classic sense of the term. Studying computation is not the same thing as studying lightning: the latter is an actual physical phenomenon, whereas the former is an abstraction. Despite some attempts, research in computer science doesn’t follow the scientific method. Even in the empirical study of, let’s say, neural networks, the objects of study are built and tweaked as we go along; they are not a pre-existing phenomenon.
But computation is not just an abstraction in the pure-mathematics sense of the word either. Most researchers in computer science I know (me included) think that our research, however abstract it might be, actually tells us something about the physical world. So computer science is not a natural science, in the sense that studying computation doesn’t fall easily under the scientific method; but it’s also not a domain of pure mathematics, because it purports to say something about the world we live in.
One way to think about it is that computer science studies which feats of engineering are possible and which aren’t. But I’m not completely satisfied with this, because it misses the sense in which computation, or learning in ML, does seem to capture something fundamental happening in the physical world.
Why do I think this question is of similar importance to the ones you mention in your post? Because knowing what kind of knowledge a field of research produces, and what it can be useful for, is fundamental to using the fruits of that research. This matters even more for AI and AI Safety: knowing what the research we produce actually means might make the difference between having the guarantees we want and merely fooling ourselves into thinking that we have them.
It seems to me that computer science is ontologically prior to physics, but not by as much as mathematics is, in kinda the same way that statistical mechanics and thermodynamics are (though maybe a little further up the chain of abstraction). The laws of thermodynamics hold in a large class of possible universes with a wide range of possible physical laws, but very far from all possible universes with all possible laws. If these fields study something physical, maybe it is something about constraints on the general mathematical character of physical law in our universe; that seems to be how the implications cash out for quantum information theory, too.