There are two parts to AGI: consequentialist reasoning and preference.
Humans have feeble consequentialist abilities, but they can use computers to implement huge calculations, provided the problem statement can be entered into the computer. For example, you can program the material and mechanical laws into an engineering application, enter a building plan, and have the computer predict what's going to happen to the structure, or what parameters should be used in the construction so that the outcome is as required. That's power outside the human mind, directed by the correct laws and targeted at a formally specified problem.
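The engineering-application pattern above can be sketched concretely: once the laws and the problem statement are both formal, the machine can run inference in either direction. A minimal sketch, where the beam formula is a standard one but the function names and numbers are illustrative, not from the original text:

```python
# Formal laws + formal problem statement = machine-powered inference.
# Standard simply-supported-beam deflection law: delta = F * L^3 / (48 * E * I).

def max_deflection(load_n, length_m, elastic_modulus_pa, inertia_m4):
    """Predict the outcome: midpoint deflection under a center load."""
    return load_n * length_m**3 / (48 * elastic_modulus_pa * inertia_m4)

def required_inertia(load_n, length_m, elastic_modulus_pa, max_delta_m):
    """Invert the same law: what stiffness keeps the outcome as required?"""
    return load_n * length_m**3 / (48 * elastic_modulus_pa * max_delta_m)

# "Enter the building plan": a 4 m steel beam carrying 10 kN.
E_STEEL = 200e9  # Pa, Young's modulus of steel
delta = max_deflection(10_000, 4.0, E_STEEL, 8e-6)       # what happens?
needed = required_inertia(10_000, 4.0, E_STEEL, 0.005)   # what parameters?
```

The same formal law answers both the predictive question and the design question; the human contribution is only the law and the plan, not the arithmetic.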
When you consider AGI in isolation, it's like an engineering application with a random building plan: it can powerfully produce a solution, but not a solution to the problem you need solved. Nonetheless, this part is essential once you do have the ability to specify the problem. And that's the AI's algorithm, one aspect of which is decision-making. It's separate from the problem statement, which comes from human nature.
For an engineering program, you can say that the computer is basically doing what a person would do given a crazy amount of time and machine patience. But that's because a person can know both the problem statement and the laws of inference formally, which is how they were programmed into the computer in the first place.
With human preference, the problem statement isn't known explicitly to people. People can use preference, but they can't state this whole object explicitly. A moral machine would need to work with preference, but human programmers can't enter it, and neither can they do what a machine could do given a formal problem statement, because humans can't know this problem statement: it's too big. It could exist in a computer explicitly, but it can't be entered there by programmers.
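The asymmetry described here can be made vivid in code. A minimal sketch, with all names hypothetical: a formal objective exists as an explicit expression a machine can optimize directly, whereas preference is available only as an oracle that can be queried case by case, never as a whole object a programmer could type in.

```python
from typing import Callable, Iterable

def optimize(objective: Callable[[float], float], candidates: Iterable[float]) -> float:
    """A machine given a formal problem statement can simply search."""
    return max(candidates, key=objective)

# Formal case: the whole problem statement fits in one line of code.
best = optimize(lambda x: -(x - 3.0) ** 2, [float(i) for i in range(10)])  # -> 3.0

# Preference case: a person can answer individual comparisons...
def human_prefers(outcome_a, outcome_b) -> bool:
    """Hypothetical stand-in for asking a person; the 'function' lives
    in their head and has no explicit source code to hand to optimize()."""
    ...

# ...but there is no explicit expression for the objective itself, so the
# powerful search above has nothing formal to be pointed at.
```

The gap is not in the search machinery but in the missing argument to it: `optimize` works fine, and `human_prefers` can be sampled, but the object that would connect them cannot be written down by hand.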
So here is the dilemma: the problem statement (preference) resides in the structure of the human mind, but the strong power of inference doesn't, while the strong power of inference (potentially) exists in computers outside human minds, where the problem statement can't be manually transmitted. Creating FAI requires these components to meet in the same system, but it can't be done the way other kinds of programming are done.
Something to think about.
This is the clearest statement of the problem of FAI that I have read to date.