“Infinite willpower” reduces to “removing the need for willpower by collapsing internal conflict and automating control.” Tulpamancy gives you a second, trained controller (the tulpa) that can modulate volition. That controller can endorse and enact a policy.
However, because the controller runs on a different part of the brain, some of the modulation circuits that make you feel tired or demotivated are bypassed. You don’t need willpower because you are “not doing anything” (not sending intentions); the tulpa is. And the neuronal circuits the tulpa runs on, the ones that generate the intentions which ultimately turn into mental and/or muscle movements, are not modulated by the willpower circuits at all.
Gears-level model
First, note that willpower is entirely distinct from fatigue.
What “willpower” actually is
“Willpower” is what it feels like when you select a policy that loses in the default competition but force it through anyway. That subjective burn comes from policy conflict plus low confidence in the chosen policy. If the task policy has only a low probability of producing a low-value reward, while competitors (scrolling, snacks, daydreams) have a high probability of producing a high-value reward, you pay a continuous tax to hold the line.
Principle: Reduce conflict and increase precision/reward for the target policy, and “willpower” isn’t consumed; it’s unnecessary. (This is the non-tulpa way.)
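To make the competition concrete, here is a toy sketch (my own illustrative framing with made-up numbers, not an established model): each candidate policy gets an expected value of probability times reward, the brain's default pick is the highest-valued one, and the "willpower tax" is the value gap you must override to hold a losing target policy.

```python
def expected_value(p_success, reward):
    """Expected value of a policy: chance of success times payoff."""
    return p_success * reward

# Hypothetical numbers: the task has low confidence, the distractor
# is near-certain and decently rewarding.
policies = {
    "write_report": expected_value(0.4, 5.0),
    "scroll_feed": expected_value(0.95, 3.0),
}

# The default winner of the competition.
default = max(policies, key=policies.get)

# "Willpower tax": the value you override to hold the target policy.
tax = policies[default] - policies["write_report"]
print(default, round(tax, 2))  # → scroll_feed 0.85
```

On this framing, "infinite willpower" means driving that tax to zero or paying it from a controller that doesn't feel the cost.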
What a tulpa gives you, in addition to infinite willpower:
Social presence reliably modulates effort, arousal, and accountability. A tulpa isn’t just “thoughts”; it is multi-modal: voice, visuals, touch, felt presence. That gives it many attachment points into your control stack:
Valuation channel: A tulpa can inject positive interpretation in the form of micro-rewards (“good job”, “you can do it, I believe in you”), i.e., generate positive reinforcement.
Interoceptive channel: A tulpa can invoke states associated with alertness or calm. The tulpa can change your mental state from “I want to lie on the floor because I am so exhausted” to “I don’t feel tired at all” in 2 seconds.
Motor scaffolding: It can execute “starter” actions (get out of bed, open editor, type first sentence), reducing the switch/initialization cost where most akrasia lives.
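The three channels above can be read as interventions on the same toy policy competition: each one bumps the endorsed policy's success probability or reward until it wins by default and no tax is paid. A minimal sketch, with all numbers hypothetical:

```python
def expected_value(p_success, reward):
    """Expected value of a policy: chance of success times payoff."""
    return p_success * reward

# Baseline: the task loses the default competition to a distractor.
task = expected_value(0.4, 5.0)
distractor = expected_value(0.95, 3.0)
assert task < distractor

# Valuation channel: micro-rewards raise the task's payoff.
# Interoceptive channel: shifting state raises felt success odds.
# Motor scaffolding: starter actions cut initialization cost,
# modeled here as a further bump to p_success.
task_boosted = expected_value(0.4 + 0.3 + 0.1, 5.0 + 1.5)
print(task_boosted > distractor)  # → True: endorsed policy wins by default
```

The point of the sketch is only that modest boosts on several channels at once can flip which policy is the default, which is the non-willpower route to the same behavior.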
The central guiding principle is to engineer the control stack so endorsed action is default, richly rewarded, and continuously stabilized. Tulpamancy gives you a second controller with social authority and multi-modal access to your levers. This controller can simply overwrite your mental state and has no willpower constraints.
The optimal policy probably combines the sledgehammer of overwriting your mental state with, at the same time, optimizing to adopt a target policy that you actually endorse wholeheartedly.