I’m not sure what the hypothetical Objectivist ‘should do’, but I believe the options available to them are:
(1) Follow the full argument (in which case everything it said makes sense), and they are no longer an Objectivist; or
(2) Decline to follow the full argument (in which case some of it doesn’t make sense), and they remain an Objectivist.
In some sense, this is already the case. People are free to believe whatever they like. They can choose to research their beliefs and challenge them further. They might read things that convince them to change their position. If they do, are they “compelled”? Are they “forced”? I think they are, in a way. And I think this is a good type of control: control by rational persuasion.
As for whether an external agent should impose its beliefs onto an agent choosing option (2), I think the answer is ‘no’. That would be oppression.
I think the question you are getting at is, “Should a digital copy of yourself be able to make you do what you would be doing if you were smarter?”
Most would say no, for obvious reasons: nobody wants their AI bossing them around. This is mostly because we typically control other agents (boss them around) by force, using rules and consequences.
What I’m suggesting is that we will get so much better at controlling things through rational persuasion that force will no longer be required for control. All the ‘smarter version of yourself’ does is tell you what you probably need to hear, when you need to hear it, like your conscience.
It’s important that you retain the right to choose whether to listen to it.
In general, I see the alignment problem as a category error. There is no way to align artificial intelligence, and AI isn’t really what we want to build anyway. We want to build an oracle that can tell us everything. That’s a collective intelligence: a metaphorical brain that represents society by treating each member as a nerve on its spinal cord.
I’m not sure I can imagine a concrete instance where both (1) everything it said makes sense and (2) I am not able to follow the full argument.
Maybe you could give me an example of a scenario?
I believe that, if the alignment bandwidth is high enough, whatever an external agent does could be explained to ‘the host’, if that were what the host desired.