Yes, you could reward and punish recalcitrant subagents separately from other subagents. Is this an example of what you’re talking about?
or she allows herself to sleep in once a week
In practice, it might be hard to target the relevant subagent and keep the reward/punishment confined in that subagent’s domain. Other than occasionally allowing more indulgence, I’m not sure how to do it. Any ideas?
That would be something like “satiating” them, i.e. a different model of how these things work.
Conditioning would be allowing yourself to sleep in tomorrow only if you wake up on time today, or intentionally depriving yourself of sleep the day after you sleep through the alarm. This assumes that your subagents are actually conditionable, which is implied if you treat them like they’re conscious. But I’m not at all convinced that’s the case—this is just a thought experiment.
Given the unrealistic “they’re like individuals” model, operant conditioning should work fine—yes, you’ll also punish the other agents who like sleep, but by definition you’re giving the most punishment to the ones who were most responsible for you sleeping in.
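The claim above can be made concrete with a toy sketch. This is purely hypothetical (the subagent names and the update rule are invented for illustration, not drawn from any real model): each subagent has a propensity to push for sleeping in, and a punishment scales each propensity down in proportion to that subagent's share of responsibility, so the most responsible agent takes the biggest hit even though everyone who "likes sleep" gets some punishment.

```python
# Toy sketch of operant conditioning under the unrealistic
# "subagents are like individuals" model. All names and numbers
# here are made up for illustration.

def punish(propensities, punishment=0.5):
    """Scale down each subagent's propensity in proportion to its
    share of responsibility (its fraction of the total propensity)."""
    total = sum(propensities.values())
    return {
        name: p * (1 - punishment * (p / total))
        for name, p in propensities.items()
    }

# Hypothetical subagents that all contributed to sleeping in,
# with "comfort" being most responsible.
subagents = {"comfort": 0.6, "avoidance": 0.3, "fatigue": 0.1}
after = punish(subagents)
# Every propensity shrinks, but "comfort" shrinks by the largest
# fraction, matching "most punishment to the most responsible."
```

The point of the sketch is just the by-definition part of the argument: punishment proportional to responsibility automatically concentrates on whichever subagent drove the behavior, with collateral punishment falling off for the less responsible ones.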
But I think our brains would work a lot differently if we had conscious subagents running around in them. The evidence I can think of points to, at a minimum, these subagents being really stupid, which I think favors the hypothesis that we really are unified selves—we just follow a list of rules rather than always being rational.
Thanks for the link.