Your proposal seems to involve throwing out “sophisticated mathematics” in favor of something else more practical, and probably more complex. You can’t do that. Math always wins.
The problem with math is that it’s too powerful: it describes everything, including everything you’re not interested in. In theory, all you need to make an AI is a few Turing machines to simulate reality and Bayes’ theorem to pick the right ones. In practice this AI would take an eternity to run. Turing machines live in a world of 0s and 1s, but we live in a world made of clouds and birds, and a machine that talks in binary about clouds and birds would be complicated and hard to find. For a practical AI, you need a model of computation that regards nouns, verbs and people as the building blocks of reality, and regards Turing machines as very weird examples of nouns. This model would perform worse than a Turing machine if presented with a freakish alternate universe with no concept of time or space, but otherwise it’s fine. The hard part is compromising between simplicity and open-mindedness.
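The “Turing machines plus Bayes” idea can be made concrete with a toy sketch. This is purely illustrative and not from the original thread: the “programs” here are a handful of hand-written bit predictors rather than real Turing machines, and the description lengths assigned to them are arbitrary. The prior is Solomonoff-style, weighting each hypothesis by 2 to the minus its description length, and Bayes’ theorem then concentrates the posterior on whichever program actually fits the data.

```python
from fractions import Fraction

# Toy "Turing machines + Bayes" sketch. Each hypothesis is a tiny
# program that predicts the next bit from the bits seen so far.
# Description lengths below are made-up stand-ins for program size.

def const0(bits): return 0                                # always predict 0
def const1(bits): return 1                                # always predict 1
def alternate(bits): return 1 - bits[-1] if bits else 0   # flip the last bit
def repeat(bits): return bits[-1] if bits else 0          # repeat the last bit

hypotheses = {const0: 1, const1: 1, alternate: 2, repeat: 2}

def posterior(data):
    # weight = prior * likelihood; a deterministic predictor has
    # likelihood 1 if it predicted every observed bit, else 0
    weights = {}
    for h, length in hypotheses.items():
        prior = Fraction(1, 2 ** length)          # 2^-length prior
        ok = all(h(data[:i]) == data[i] for i in range(len(data)))
        weights[h] = prior if ok else Fraction(0)
    total = sum(weights.values())
    return {h.__name__: w / total for h, w in weights.items()}

print(posterior([0, 1, 0, 1, 0]))
```

On the alternating sequence above, only `alternate` survives, so all the posterior mass lands on it. The impracticality complaint is visible even here: a real version would have to enumerate all programs, not four.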
The same applies to neural networks. In theory, the shape can be anything you like as long as it’s big enough. (I’m leaving out a lot of details here, sorry.) Math is just the general framework that you build reality inside.
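The “big enough” point can be sketched numerically: a one-hidden-layer network with enough hidden units can fit a smooth target closely, more or less regardless of other details. The sketch below takes a shortcut that is my choice, not the thread’s: the hidden weights are random and frozen, and only the output layer is fit by least squares; the width, target function, and seed are arbitrary.

```python
import numpy as np

# One-hidden-layer tanh network fitting sin(x). Hidden weights are
# random and frozen (a random-features shortcut); only the output
# weights are solved for. With width > number of points, the training
# error is essentially zero -- "big enough" wins regardless of shape.

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]   # 200 training inputs
y = np.sin(x).ravel()                  # target function

width = 500
W = rng.normal(size=(1, width))        # random input weights, frozen
b = rng.normal(size=width)             # random biases, frozen
H = np.tanh(x @ W + b)                 # hidden-layer activations

out, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output layer
err = np.max(np.abs(H @ out - y))
print(f"max abs training error: {err:.2e}")
```

This only shows fit on the training points, which is the weakest form of the claim; the full universal-approximation story is exactly the “lot of details” being skipped.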
Empirical methods are upside down. You’re starting with the gritty details, hoping that as everything piles up something more powerful than Bayesian inference will emerge. That won’t happen. Instead you’ll get a lousy, brittle copy of Bayesian inference that can’t handle anything too different from what it was designed for… like a human.
Your proposal seems to involve throwing out “sophisticated mathematics”
I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it’s impressive.
Obviously, in fields like physics math is very, very useful. In other cases, it’s better to just go out and write down what you see. So cartographers make maps, zoologists write field guides, and linguists write dictionaries. Why a priori should we prefer one epistemological scheme to another?
I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it’s impressive.
I’d find it much more impressive if you could do anything useful in AI or computer vision without math.
I think I understand better now.
What else is there to see besides humans?
Paperclips. Also, paperclip makers. And paperclip maker makers. And paperclip maker maker makers.
And stuff for maintaining paperclip maker maker makers.
And paper?
Maybe.