Theoretical question. I have a character in a fanfic I’m writing with an ‘infinite will’ superpower they can turn on and off. Besides the obvious problems with forgetting to eat/sleep etc, what realistic downsides should overuse of this power have?
Besides the obvious problems with forgetting to eat/sleep etc, what realistic downsides should overuse of this power have?
Extrapolate from stories about Adderall: infinite willpower for doing the wrong thing. The character winds up spending a year perfecting the building of toothpick castles, for example.
I’ll generalize from my own observations: I’m pretty sure there were many instances where, had I infinite willpower, I would have gotten some rather nice things done… but with some gaping flaws that I wouldn’t notice until much later, flaws which would probably be pretty hard to mend without starting over. The sorta-kinda advantage to not being able to exert infinite will is the opportunity for plans to get gradually refined in the background, or for incidental side things to get involved, etc.
Metaphorical tunnel vision, more or less.
So, for example, if this character is using their infinite willpower to construct an AGI, they’d be more likely to brute-force through it and end up with a UFAI.
Do you actually mean “infinite will” or “infinite ability to focus”?
Will—that is, the ability of the rational mind / ego / neocortex to override anything coming from lower levels—doesn’t lead people to forget to eat or sleep. Focus does.
I didn’t realise that. What is the evolutionary reason infinite willpower would be a bad idea then? The character was meant to have infinite willpower, but the plot I had in my mind was that they jump at the chance only to find that it was a bad idea. The character who gains it is too stupid to think of the consequences.
Was it a bad idea for a plot, then? Are there no downsides?
What is the evolutionary reason infinite willpower would be a bad idea then?
Infinite willpower implies that you don’t have to listen to what your body and your subconscious are telling you. Evolutionarily speaking, it’s a bad idea to ignore your body telling you it’s damaged and will break down soon while your mind is soaring through n-dimensional algebra.
they jump at the chance only to find that it was a bad idea. The character who gains it is too stupid to think of the consequences.
I’d just call it “stupid” :-) Alternatively you can think about it as confirmation bias: once you commit to an idea or an approach you ignore/deny/discount evidence that tells you that idea is wrong.
The downsides of not updating one’s beliefs on new evidence are rather obvious.
“Willpower” is kind of a vague concept. Perhaps it means being able to always do what you think you ought to do? In that case, the downsides would involve problems of rationalization, or weaknesses in his reasoning, creating mismatches between what he thinks he ought to do and what will actually produce the best consequences. Maybe his laziness, squeamishness, and fear would actually prevent him from doing things he thinks he should do but which would produce greater harm than he anticipates, or his anger, lust, etc. would motivate him to do things with beneficial consequences he doesn’t anticipate, and so he ends up worse off when all of that gets overridden.
My quick ideas are something like going from vacillating between exploration and exploitation strategies into pursuing pure exploitation and missing out on exploration paths, and generally ending up in failure modes of Hofstadterian sphexishness, where you doggedly keep at the same behavior when it obviously isn’t working right in the circumstances. Though you could also use the power to pursue deliberate exploration and thinking things through, so it’s not quite that simple.
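The exploration/exploitation failure mode above can be sketched with a toy two-armed bandit (a minimal illustration I'm adding here, not from the original thread; the payoff probabilities and the epsilon values are made up). A purely exploiting agent, like a character running on unbounded will, can lock onto whichever option looked good first and never notice the better one:

```python
import random

def run_bandit(epsilon, steps=10_000, seed=0):
    """Two-armed bandit: arm 0 pays off 30% of the time, arm 1 pays off 70%.
    With probability epsilon we explore (pick a random arm);
    otherwise we exploit the arm with the best estimated payoff so far."""
    rng = random.Random(seed)
    payoffs = (0.3, 0.7)
    estimates = [0.0, 0.0]   # running mean reward per arm
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                           # explore
        else:
            arm = 0 if estimates[0] >= estimates[1] else 1   # exploit
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# With epsilon=0 the agent never even samples arm 1 (ties break toward
# arm 0, and arm 1's estimate stays at its initial value), so it earns
# roughly the inferior 0.3 rate; a little exploration finds the 0.7 arm.
print(run_bandit(epsilon=0.0))
print(run_bandit(epsilon=0.1))
```

The sphexish behavior is exactly the epsilon=0 case: doggedly repeating the same action because, by construction, no evidence about alternatives ever comes in.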
Still, now there’s the meta-problem of whether the thing you’re trying to think through is the best thing to be thinking about. For a historical analogue, would this end up with you staying up for weeks cranking out the best machine code with 1950s computers while people without the superpower decide that cranking out machine code for weeks on end is horrible and go ahead and invent Lisp and Fortran? If you resolve to do something that’s too difficult, like proving whether P != NP, will you ever stop? How will you pick the right level to work at, so that you neither end up arranging pebbles instead of inventing a compiler nor end up trying to solve all of philosophy and never get anywhere because it’s too hard for you?
How do you end up deciding to turn the power off anyway once you’ve turned it on? I guess there could be a natural control there, like you eventually falling unconscious from lack of sleep and waking up with the power turned off. What if there isn’t?