If this argument makes sense, then AGIs will not necessarily be rational at first. Instead, they may very well start out with lots of biases, just as we do. However, just as we are likely to attempt to remove our own biases to improve ourselves, the AGI will also attempt to remove its own biases by modifying itself.
If ‘biases’ are a result of being mistuned, then it’s not ‘throwing them away’ so much as correcting.
If ‘biases are heuristics’ - heuristics can be useful (like when you don’t have enough information). (Different domains may call for different heuristics.)
My point was more ‘biases are multiple things’. Different things may require different approaches. I am not sure what many people do that should be thrown away. Such a thing may exist, but it seems less likely, i.e., not your average bias. I could be wrong about that (changes since the ancestral environment, etc.). (Some may argue that being less explorative or more depressed during the winter is one.)
In the context of people, I’m more clear on biases. An AI? Less so.
I agree. Regarding biases that I would like to throw away one day in the future, while being careful enough to protect modules important for self-preservation and self-healing, I'd probably like to remove excessive energy-preserving modules, such as the ones responsible for laziness, which are only really useful in ancestral environments where food is scarce.
I like your example of senseless winter bias as well. There are probably many examples like that.
You are right; I should have written that the AGI will “correct” its biases rather than that it will “remove” them.