Obviously they will have competing objectives internally, just like humans do.
This is not so obvious to me.
Humans are a product of evolution, so it makes sense for us to have various trackers of "things that can hurt us" (such as hunger, low social status, etc.), where each gives simple advice, but sometimes the different pieces of advice contradict each other (you are really hungry, but in a situation where admitting it would lower your status).
Computers follow an algorithm. If the algorithm is "for each possible token, calculate the probability of its appearing in the text, then write the token with the greatest probability", there is not much potential for internal conflict.
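To make the point concrete, here is a minimal sketch of that algorithm (greedy decoding). The `toy_logits` function is a hypothetical stand-in for a real language model; the point is that the loop applies one rule, an argmax, with no competing subsystems anywhere:

```python
def greedy_decode(logits_fn, prompt_tokens, max_new_tokens=5):
    """Greedy decoding: at each step, pick the single most probable token.

    `logits_fn` maps a token sequence to a score for every token in the
    vocabulary. One rule, applied repeatedly -- nothing here can "disagree"
    with anything else.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = logits_fn(tokens)
        # Argmax: the token with the greatest score wins, deterministically.
        next_token = max(range(len(scores)), key=scores.__getitem__)
        tokens.append(next_token)
    return tokens

# Toy "model" (purely illustrative): always favors the token numbered
# one higher than the last token, wrapping around the vocabulary.
def toy_logits(tokens, vocab_size=10):
    favored = (tokens[-1] + 1) % vocab_size
    return [1.0 if t == favored else 0.0 for t in range(vocab_size)]

print(greedy_decode(toy_logits, [0]))  # [0, 1, 2, 3, 4, 5]
```

(In practice, decoders often sample from the distribution rather than taking the strict argmax, but that changes nothing about the "internal conflict" question: it is still a single rule applied to a single output distribution.)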