Earlier, I explained to someone that most of the problems in Eliezer’s forthcoming Open Problems in Friendly AI sequence are still at the stage of being philosophy problems. Why, then, do Louie and I talk about FAI being “mostly a math problem, not a programming problem”?
The point we’re trying to make is that Friendly AI, as we understand it, isn’t chiefly about hiring programmers and writing code. Instead, it’s mostly about formalizing problems (in reflective reasoning, decision theory, etc.) into math problems, and then solving those math problems. The formalization step itself will likely require the invention of new math — not so much clever programming tricks (though those may be required at a later stage).
Most of the “open FAI problems” are still at the stage of philosophy because, as Louie says, “most of the theory behind self-reference, decision theory, and general AI techniques haven’t been formalized… yet.” But we think those philosophical problems will be formalized into math problems, sometimes requiring new math.
So we’re not (at this stage) looking to hire great programmers. We’re looking to hire great mathematicians, even though most of the problems are still at the “philosophy” stage of formalization.
The specific practice of turning philosophy into math is a key feature of the field called formal epistemology, but Louie and I couldn’t find any “standard” classes or textbooks on formal epistemology that we would strongly recommend, so we had to leave it off the list for now.
Examples of turning philosophy into math include VNM’s formalization of “rationality” into axiomatic utility theory, and Shannon’s formalization of information concepts with his information theory.
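For concreteness, the two formalizations just cited can each be stated in one line. These are the standard textbook forms, included here only as illustrations of what “turning philosophy into math” produces, not as anything specific to FAI:

```latex
% VNM: any preference ordering satisfying completeness, transitivity,
% continuity, and independence is representable by expected utility.
% Lottery A (outcomes a_i with probabilities p_i) is preferred to
% lottery B (outcomes b_j with probabilities q_j) iff:
\[
  A \succeq B \iff \sum_i p_i\, u(a_i) \;\ge\; \sum_j q_j\, u(b_j)
\]
% Shannon: the vague notion of the "information" carried by a random
% variable X is formalized as its entropy, in bits:
\[
  H(X) = -\sum_x p(x) \log_2 p(x)
\]
```

In both cases an informal philosophical concept (“rational preference,” “information”) was replaced by a precise mathematical object, after which the hard questions became theorems to prove rather than intuitions to debate.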
will likely require the invention of new fundamental math
I wish you’d stop saying that (without justification or clarification). Modern math seems quite powerful enough to express most problems, so the words “new” and “fundamental” sound somewhat suspicious. Is this “new fundamental math” something like the invention of category theory? Probably not. Clarifying the topic of Friendly AI would almost certainly involve nontrivial mathematical developments, but in the current state of utter confusion it seems premature to characterize these developments as “fundamental”.
We don’t know how it will turn out; what we do know is that only a mathematical theory would furnish an accurate enough understanding of the topic, so it seems a good heuristic to have mathematicians work on the problem, since non-mathematicians probably won’t be able to develop a mathematical theory. In addition, we have some idea of the areas where additional training might be helpful, such as logic, type theory, formal languages, probability theory, and computability.
You’re right, the word “fundamental” might suggest the wrong kinds of things. I’m not at all confident that Friendly AI will require the invention of something like category theory. So, I’ve removed the word “fundamental” from the above comment.
Developing new fundamental math is hard. SI may have to do it, but keep it to a minimum if you want to win!