No comment on whether that hypothesis is true, but if it were, I maintain that it is not particularly relevant to AGI alignment. People can be patient in pursuit of good goals like helping a friend, and people can also be patient in pursuit of bad goals like torturing a victim. Likewise, people can act impulsively towards good goals and people can act impulsively towards bad goals.
As to the “why are there two mechanisms that do about the same thing?” question, I guess this is part of the answer:
So if you’re an animal at constant risk of having your behavior hijacked by parasites, what do you do?
First, you make your biological signaling cascades more complicated. You have multiple redundant systems controlling every part of behavior, and have them interact in ways too complicated for any attacker to figure out.
From SSC: Maybe Your Zoloft Stopped Working Because A Liver Fluke Tried To Turn Your Nth-Great-Grandmother Into A Zombie
oh ok, i see, it’s aimed more at rationality than at alignment