Why should rational agents deliberately sabotage their ability to understand humans? Merely having a concept of something doesn't imply applying it to yourself. Nor do I see any noticeable harm in a rational agent applying the concept of a specific precommitment to itself; it might even be useful, e.g., for modeling itself in hypothesis testing.
Obviously.