If you’re talking about the two-stage model, I’m aware of it but haven’t read their original writings. Still, I don’t see how that could possibly help make my choices more or less “free” in any sense I care about for any philosophical, motivational, or moral reason.
If I am deterministically selecting among options generated within myself by an indeterministic process, sure, that’s possible, and I appreciate that it’s an actual question we could find an answer to. But I’ve never been able to see why I might prefer that situation to deterministically choosing among states generated by any other process that’s outside my control, whether it happens inside my body or not, whether it’s deterministic or not. (Yes, I realize I am essentially rejecting the idea that I should consider the option-generating indeterministic process to be part of “me.” Maybe that’s a mistake, but that’s how my me-concept is (currently) shaped.)
To put it another way: Imagine I am playing a game where I (deterministically) deliberate and choose among options presented to me. Why does the question of whether my choice is free or not depend on whether the process that generated the list of options is deterministic or not? Why does it depend on whether the option-generating indeterministic module is located inside or outside my body?
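To make the structure concrete, here is a toy sketch of the setup I have in mind (the function names, and using Python’s `random` module as a stand-in for genuine physical indeterminism, are my own illustration, not anything from the two-stage literature):

```python
import random

def generate_options(n=3):
    # Stage 1 (indeterministic): some process outside my deliberation
    # proposes candidate options. random.gauss stands in for whatever
    # genuinely indeterministic physics is supposed to be doing.
    return [random.gauss(0, 1) for _ in range(n)]

def deliberate(options):
    # Stage 2 (deterministic): given the same option list, this always
    # returns the same choice -- here, the option closest to zero.
    return min(options, key=abs)

choice = deliberate(generate_options())
```

My question is why the freedom of the `deliberate` step should depend on whether `generate_options` is deterministic, or on whether it runs “inside” or “outside” the system doing the deliberating.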
Separately, I also have a hard time with the idea that this implies that the question of free will could depend on which version of quantum mechanics is (more) true, because if Many Worlds is correct then it is no longer true that the future is indeterministic; instead it is only true that different parts of current-me will (deterministically) no longer be in communication with one another in the future.
(Continuing with the game-themed thought experiments because they’re readily available and easy to describe.) This idea feels as strange to me as saying that a contestant’s answers on Who Wants to Be a Millionaire become more or less free when you take away or use the 50-50 lifeline. I don’t mean that to be flippant. In some sense it’s true: all of a sudden there are fewer options to freely choose among. But it is also a deterministic fact about the world that at some point in the future, that lifeline may be invoked, and certain options but not others will disappear. To me that seems like a strange hook on which to hang my self-concept, will, and moral responsibility.
If I am deterministically selecting among options generated within myself by an indeterministic process,
I didn’t say that was the case any more than indeterministically choosing between deterministically generated options.
sure that’s possible, and I appreciate that it’s an actual question we could find an answer to. But, I’ve never been able to see why I might prefer that situation to deterministically choosing among states generated by any other process that’s outside my control, whether it happens inside my body or not, whether it’s deterministic or not.
In the big picture, this is happening in an indeterministic universe. So what you get is really being able to change things, to bring about futures that aren’t inevitable; and when you are able to change things, the causal chain begins at you.
Why does the question of whether my choice is free or not depend on whether the process that generated the list of options is deterministic or not?
Indeterminism is, tautologously, freedom from determinism. The standard argument against libertarian free will depends on the universe working in a certain way, i.e. being deterministic. The claim that libertarian free will depends on the universe being indeterministic is a corollary.
But it is also a deterministic fact about the world that at some point in the future, that lifeline may be invoked
Why would it be a deterministic fact in an indeterministic world?
Indeterminism is, tautologously, freedom from determinism.
Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
Why would it be a deterministic fact in an indeterministic world?
The “may” is important there, and I intended it to be a probabilistic may, not a permission-granting may. It is a deterministic fact that it might be invoked, not that it necessarily will.
Yes, and determinism isn’t the thing I want freedom from. External control is, mostly.
Values differ. But it’s strange for rationalists not to care about the openness of the future when the whole AI safety thing is about steering towards a non-dystopian future.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept. It’s similar to how a starving man cares more about getting a loaf of bread than he does about getting a lesson on the biochemistry of fermentation. Whether humans or AIs or aliens decide the direction of the future, they all do so from within the same universal laws and mechanisms. Free will isn’t a point of difference among options, and it isn’t a lever anyone can pull that affects what needs to be done.
I am also happy to concede that, yes, creating an unfriendly AI that kills all humans is a form of steering the future. Right off a cliff, one time. That’s very different from steering in a direction I want to steer (or be steered) in. It’s also very different from retaining the ability to continue to steer and course correct.
Yes, agreed, that is one of the points of disagreement about free will. I find it more strange to think the future is more steerable in a world where you can’t predict the outcomes of actions even in principle.
Determinism doesn’t give you perfect predictive ability, since you can still have limitations of cognition and information. Indeterminism doesn’t have to take it away, either: it’s a feature of two-stage theories that the indeterminism is mostly at the decision-making stage, not the decision-executing stage.
In the case of steering the future and AI, the thing in question is more about who is doing the steering, less about the gears-level question of how steering works as a concept.
Says who? If we are predetermined to be killed by ASI, that’s that: all our current efforts are in vain.
Free will isn’t a point of difference among options,
No, it’s a point about whether there are options.
It’s also very different from retaining the ability to continue to steer and course correct.
Which you can’t “retain”, since you never had it, under determinism.