“what mechanism, within the code, causes the algorithm to consider some data or not”
I like this way of expressing it. It seems like a successful way to taboo various anthropomorphic concepts.
Unfortunately, I don’t understand the distinction between “should do next?” and “should do next to resolve the problem?”. Is the AI supposed to do something else besides solving the users’ problems? Is it supposed to consist of two subsystems: one a general problem solver, and the other some kind of gatekeeper saying, “you are allowed to think about this, but not about that”? If yes, then who decides what data the gatekeeper is allowed to consider? Is the gatekeeper the less intelligent part of the AI? Is the general-problem-solving part allowed to model the gatekeeper?
I wrote an example, which I erased, based on a possibly apocryphal anecdote by Richard Feynman that I am recalling from memory, about the motivations for working on the Manhattan Project. The original reason for starting the project was to beat Germany to building an atomic bomb; after Germany was defeated, that reason no longer applied, but he (and others who shared his motivation) continued working anyway, solving the immediate problem rather than the one they had originally set out to solve.
That’s an example of the logical system and the motivational system being in conflict, even if the anecdote doesn’t turn out to be very accurate. I hope it is suggestive of the distinction.
The motivational system -could- be a gatekeeper, but I suspect that would indicate substantive issues in how the logical system is devised. It should function as an enabler: the motive force behind all actions taken within the logical system. And yes, in a sense it should be less intelligent than the logical system; if it considers everything to the same extent the logical system does, it isn’t doing its job, just duplicating the logical system’s efforts.
That is, I’m regarding an ideal motivational system as something that drives the logical system; the logical system shouldn’t be -trying- to trick its motivational system, in somewhat the same way, and for the same reason, that you shouldn’t try to convince yourself of a falsehood.
The trouble in describing this is that I can think of plenty of motivational systems, but none that does what we want here. (Granted, if I could, the friendly AI problem might be substantively solved.) I can’t even say for certain that a gatekeeper motivator wouldn’t work.
Part of my mental model of this functional dichotomy, however, is that the logical system is stateless: if the motivational system asks it to evaluate its own solutions, it can do so only with the information the motivational system gives it. The communication model has a very limited vocabulary. Rules for the system (but not rules for reasoning) are encoded into the motivational system and govern its internal communications only. The logical system goes as far as it can with what it has, produces a set of candidate solutions and unresolved problems, and passes these back to the motivational system. Unresolved problems might be passed back along with the additional information needed to resolve them, depending on the motivational system’s rules.
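To make the statelessness concrete, here is a minimal sketch in Python. Every name in it (LogicalResult, solve, and so on) is hypothetical, invented purely for illustration; the point is only that each call to the logical system carries all of its own context.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LogicalResult:
    """What the logical system hands back: candidates plus leftovers."""
    candidate_solutions: tuple
    unresolved_problems: tuple


def solve(problem: str, given_information: tuple) -> LogicalResult:
    """A stateless call: the logical system sees only its arguments.

    It keeps no memory between calls, so the motivational system must
    re-supply whatever context it wants considered on each invocation.
    """
    # ... general problem solving would happen here; this stub just
    # returns a placeholder so the sketch runs end to end ...
    return LogicalResult(
        candidate_solutions=("placeholder solution for: " + problem,),
        unresolved_problems=(),
    )


# The motivational system drives every call and owns all persistent state.
result = solve("fetch the coffee", given_information=())
```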
So in my model-of-my-model, an Asimov-style AI might hand a problem to its logical system, get several candidate solutions back, and then pass those candidate solutions back into the logical system along with the rules of robotics, one by one, asking whether each action could violate each rule in turn, and discarding any candidate solution that might.
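Continuing the sketch (again, all names are hypothetical, and a toy keyword check stands in for what would really be a second stateless query to the logical system), the screening loop might look like this:

```python
RULES_OF_ROBOTICS = (
    "do not injure a human, or through inaction allow one to come to harm",
    "obey human orders, except where that conflicts with the first rule",
    "protect your own existence, except where that conflicts with the first two",
)


def could_violate(action: str, rule: str) -> bool:
    """Toy stand-in for re-querying the logical system about one
    action and one rule at a time, with no other context supplied."""
    return "injure" in rule and "disassemble" in action


def screen_candidates(candidates: list[str]) -> list[str]:
    """Discard any candidate action that might violate some rule."""
    surviving = []
    for action in candidates:
        if any(could_violate(action, rule) for rule in RULES_OF_ROBOTICS):
            continue  # discarded: possible rule violation
        surviving.append(action)
    return surviving


# Candidates would come back from the logical system; listed inline here.
print(screen_candidates(["fetch the coffee", "disassemble the intruder"]))
# -> ['fetch the coffee']
```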
Manual motivational systems are also conceptually possible, although probably too slow to be of much use.
[My apologies if this response isn’t very good; I’m running short on time, and don’t have any more time for editing, and in particular for deciding which pieces to exclude.]