Completely agree! Such possible explanations are the reason why I’m only mildly worried about psychedelic values drift. Cautious curiosity still seems to be the reasonable response.
Well, of course I was already familiar with the map-territory distinction, and while insightful in itself, it wasn’t the insight I took from that paragraph.
The new insight is a deeper understanding of the degree to which consciousness is functionally necessary for human behaviour: literally as necessary as thermostats are for an air-conditioning system. Also, while I understood that I have maps of reality in my consciousness, I suppose I wasn’t explicitly thinking of my consciousness as itself being a map.
It seems reasonable to be extra sceptical towards evidence that is obtained while your evidence-evaluating engine is distorted, and extremely sceptical towards evidence which can only be obtained in such a state.
Experiencing divine grace under LSD is as much evidence in favour of God actually existing as watching a psychic with an earpiece tell you the details of your life is evidence in favour of telepathy. Both performances can be impressive, but the design of both experiments is completely flawed.
I expect rationality-adjacent people to understand this. And if they nevertheless change their minds on the subject of religion, that seems mildly disturbing.
Of course, it can be a completely harmless thing. For instance, some people couldn’t imagine how anyone could be religious, then took some psychedelics and understood the idea of religious experience in principle, which technically made them more religious than they used to be. But I suspect it can be much more dramatic than that. I wish we had more information about such matters.
I’ve heard quite a lot about psychedelic-related value changes. This mildly worries me and makes me suspicious about respondents being happy with their personality changes.
One example from the data of this survey: I doubt that people from rationality-adjacent communities would endorse becoming more religious than they are now, or would want to take a pill that would make them so, in 24% of cases. Yet the majority of respondents tend to endorse their personality changes from psychedelics.
Theistic proof from abiogenesis just passes the buck of improbability from abiogenesis to the existence of God that wills abiogenesis to happen.
Invoking many worlds here will do more harm than good. Next thing we will have theistic proof from many worlds.
The Good Regulator Theorem asserts that “every good regulator of a system must be a model of that system.” Therefore, the air conditioning system must have some heat-map (e.g. via thermostats) of the building (i.e. a model). Similarly, for an organism to maintain its existence, it must have a model of the system it is trying to sustain, i.e. a model of itself and its surrounding environment. This way, the organism can remain within a narrow set of favourable physical states (the organism’s attractor set) which allow it to stay alive.
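The thermostat case can be made concrete with a toy sketch (the functions, numbers, and dynamics below are my own illustration, not part of the theorem): the regulator’s sensed temperature is its model of the room, and without that model it could not pick the action that keeps the room inside its attractor set.

```python
# Toy illustration of the Good Regulator idea: a thermostat keeps the room
# inside its "attractor set" only because it carries a model (a temperature
# reading) of the system it regulates. All numbers are arbitrary.

def regulate(temp, target=21.0, band=0.5):
    """Pick a control action based on the regulator's model of the room.

    The 'model' here is just the sensed temperature; without it the
    regulator could not choose the right action."""
    if temp > target + band:
        return "cool"
    if temp < target - band:
        return "heat"
    return "idle"

def step(temp, action):
    """Crude room dynamics: heating/cooling nudges the temperature."""
    return temp + {"heat": 0.4, "cool": -0.4, "idle": 0.0}[action]

temp = 25.0
for _ in range(20):
    temp = step(temp, regulate(temp))

# After enough steps the temperature sits inside the attractor set.
assert abs(temp - 21.0) <= 1.0
```

Cutting the feedback from room to regulator (i.e. deleting the model) leaves the system unable to stay in the favourable region, which is the theorem’s point in miniature.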
This was insightful. I hadn’t previously thought of my consciousness as being part of the same continuum as a bunch of thermostats, but in hindsight it’s very obvious.
I notice that my ability to conceive of philosophical zombies has decreased even further.
First of all, notice how all the talk about predestination and fate doesn’t change anything in our decision-making process.
- Your honour, I may have killed all these kids, but I was bound to do it by the laws of the universe! It’s unfair to punish me!
- Be that as it may, I’m bound to sentence you to life in prison by those same laws of the universe. It’s useless to nag about it.
- But I was predestined to nag about it, so it’s useless to ask me not to nag!
- And I’m fated to ask you to shut up. Also, we’ve already exceeded three levels of recursion, so take this man to prison!
I think this is a good sign that we are trying to answer the wrong question. Can we correct it?
If you’ve read the answer to free will, then you know how “couldness” is reduced in a deterministic framework. When we make a decision, we mark some outcomes as primitively reachable from our current position. Feeding forward through the decision tree, we mark more and more outcomes as reachable, because their preconditions are themselves reachable, and so on.
We can reduce “shouldness” in a similar way. We start from the state of the world that corresponds to our goals and values and mark its causes as leading to it. Feeding backward through the decision tree until we reach our current position, we get a chain of actions that lead to our preferred world state.
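The two reductions can be sketched as a toy graph search (the graph, state names, and goal are my own illustration): forward marking gives “couldness”, backward marking from the goal gives “shouldness”.

```python
# Toy decision graph: edges map a state to the states primitively
# reachable from it. The states themselves are just an illustration.
edges = {
    "start": ["walk", "stay"],
    "walk":  ["shop", "park"],
    "shop":  ["buy food"],
    "park":  [],
    "stay":  [],
    "buy food": [],
}

def couldness(graph, current):
    """Feed forward: mark every outcome reachable from the current state."""
    reachable, frontier = set(), [current]
    while frontier:
        state = frontier.pop()
        if state not in reachable:
            reachable.add(state)
            frontier.extend(graph[state])
    return reachable

def shouldness(graph, current, goal):
    """Feed backward from the goal: mark the states whose choices lead to it."""
    leads_to_goal = {goal}
    changed = True
    while changed:
        changed = False
        for state, nexts in graph.items():
            if state not in leads_to_goal and any(n in leads_to_goal for n in nexts):
                leads_to_goal.add(state)
                changed = True
    return leads_to_goal

print(couldness(edges, "start"))               # every state is reachable here
print(shouldness(edges, "start", "buy food"))  # start -> walk -> shop -> buy food
```

Both properties are computed entirely inside the algorithm, which is the point: they need no indeterminism in the world outside it.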
These “couldness” and “shouldness” properties are an essential part of our decision-making algorithm, without which it couldn’t work; and they are meaningful under determinism. They are real in the same sense that our decision making is real. People tend to mix up this decision-making couldness and shouldness with metaphysical couldness and shouldness, and that’s where a lot of the confusion comes from. But as soon as we understand that these are two completely different things, it all becomes clear.
If we didn’t have decision-making couldness and shouldness, then indeed it would be unreasonable to apply ethical categories like guilt or blame to us; it would be like blaming a rock. A rock can’t make decisions: it doesn’t execute any decision-making algorithm. However, we can still notice when a rock doesn’t satisfy our needs or causes something we don’t like.
But why not treat people’s behaviour the same way? Why do we need to talk about blame or shouldness at all, when we could just talk about causes and effects? Because it captures our values and allows us to dramatically improve our decision making with regard to them, saving lots of computing power.
- Your honour, I may have killed all these kids, but their parents caused their deaths too! If they hadn’t let their children play in that particular playground, I wouldn’t have killed them! If they hadn’t given birth to these children in the first place, no one could have killed them at all! If anything, it’s the parents who should be judged here! They had many more opportunities to prevent their children’s deaths than I did!
- Be that as it may, our society values people’s right to have children, as well as citizens’ freedom of movement. But it doesn’t value the killing of random children for no reason. That’s why it’s considered a crime, why you are guilty of it, and why I’m sentencing you to life in prison.
You are doing a thing where you misunderstand me because you are filtering my statements through your beliefs.
Of course our inferential distance is huge; that’s no surprise. Nor do I feel that you are really trying to cross it. Also, considering that the OP’s question was about determinism, it seems quite appropriate that I talk from a determinist position. However, as it happens, I’m indeed interested in better understanding indeterminism and libertarian free will. I accept that your usage is different, but currently I don’t understand it, so it doesn’t make any sense to me, and I’m just trying to guess. I would be grateful if you explained it in great detail.
Yes. So what? I have never claimed that real counterfactuals are the same thing as parallel worlds.
Then I don’t understand what you mean by real counterfactuals. In what sense are they real? If it’s just the fact that something could have happened differently, can you explain what you mean by “could” here? Or by probability?
Actually, as we are figuring out each other’s definitions: what do you mean by logical counterfactuals? Their existence in the mind of the decision maker, or in some kind of separate platonic realm of logic?
Do I understand correctly that you claim logical < real < many-worlds counterfactuals with regard to freedom of will? How do these options affect my decision-making algorithm?
I don’t. I have already stated that the argument is invalid.
I’m not thinking that you are. I’m explaining why I would ask you such a question (as well as the questions above). I have a detailed model of how these things work out under determinism, but I lack such a model for indeterminism. So I have to ask for explanations of things which may look obvious to you, not in order to mess with you, but to improve my understanding.
I think you have been mentally translating “free will” into “feeling of freedom” because you believe in the doctrine that “free will just is the feeling of freedom”.
I’ve stated very clearly that humans have free will, and even explained in what sense. Have you been mentally translating my messages into “free will is just a feeling” because you believe that determinists believe that “free will just is the feeling of freedom”? But I suppose such guessing won’t lead us anywhere, so never mind. You may replace “feeling of freedom” with “being free”; the question is still open: how will I be more free if counterfactuals are “real”, whatever that means?
And of course the most mysterious question is still open as well:
If an event happened, I can now see the actual outcome. Therefore I can determine the outcome (by seeing it), therefore the event is not indeterminable. Then how can anything happen under indeterminism?
There is no reason to believe immortality is possible.
Then replace immortality with any other scientific breakthrough which hasn’t been achieved yet, despite science existing for so many years, but which you believe to be possible in the future. The argument stays the same.
Isn’t the current consensus that phytoestrogens from soy and grain do not affect male fertility or testosterone levels?
What? Where did I say that?
You said that some people would feel more free if counterfactuals and probability, which are part of our decision-making algorithm, existed somewhere outside of our minds.
I don’t know what you are referring to. Counterfactuals follow from indeterminism because indeterminism means an event could have happened differently. It’s quite straightforward.
There seems to be a huge gap between “something could have happened differently” and “there actually exists a parallel universe where this indeed happened differently”. If we consider probability and counterfactuals to exist only on the map, it’s easy to cross this gap by using different equivalent interpretations of our uncertainty. But otherwise I don’t see how one follows from the other.
Why wouldn’t it happen? To me that looks like a circular argument: nothing can happen unless it is determined, so everything is determined. Determinism, therefore determinism.
Because if an event happened, I can now see the actual outcome. Therefore I can determine the outcome (by seeing it), therefore the event is not indeterminable. I agree that it’s an obvious tautology; that’s exactly why I feel so confused trying to imagine an alternative.
Whether indeterminism-based free will makes sense is a separate question from whether indeterminism makes sense.
I agree. But that’s what my initial claim was about: libertarian free will not making sense.
Why haven’t we reduced qualia already, if reductionism is an old idea?
For the same reason we haven’t yet developed the means to be immortal. It requires lots of actual scientific work in the direction that philosophy has shown us. The philosophical groundwork may have been done, but the scientific work has not. That’s exactly what I said in my first comment.
I don’t see how what you explained is more than decision making. As soon as we understand that probabilities are part of the map, not the territory, it’s clear that causing the future is exactly the same thing as influencing it in a way that makes future A more likely than future B. I also do not understand why anyone would feel more free if their decision-making algorithm existed outside of their mind, or how that’s even possible in theory.

Another thing I have trouble understanding is how the objective existence of counterfactuals follows from, or is even compatible with, indeterminism. When we model an event with multiple outcomes, we can either perceive it as random in one world, or as determinable in many worlds, where each world corresponds to one outcome. But you claim that this won’t be enough for libertarians. For some reason they need both at the same time?
The most confusing thing for me is the whole idea of an objectively indeterminable event. If it’s indeterminable even for the universe itself, how can it happen in this universe at all? I can think of a justification via our universe being an interventionist simulation, but that just passes the buck to the universe from which the intervention is performed.
I can definitely determine what my decision is. I do it every time I make one. And I do it via my decision-making algorithm, which can be executed in this universe, specifically on my brain. This requires quite a lot of determinism, and I don’t see how it can make sense if my decisions can’t be determined. If someone’s definition of free will requires decisions to be undeterminable, I claim that such a definition doesn’t make any sense.
The Hard Problem is the problem of finding a physical reduction of qualia, so saying “duh, find a physical reduction of qualia” isn’t some novel solution that no one thought of before.
What does novelty have to do with anything here? Anyway, understanding that we have already dealt with similar problems before can give some valuable insights.
I mostly agree.
I like this. Free will is the feeling when you don’t know the causes of your thoughts and actions.
It’s definitely a huge part of the puzzle, but not all of it. Free will is also the feeling of not knowing the choices you will make in the future, and the process of determining those choices from all their causes.
Suppose Omega perfectly knows all the prior causes of my decisions: it has my source code and all the inputs. Omega would still have to run the source code with those inputs, actually executing my decision-making algorithm, before it could determine my actions. Nevertheless, my actions are determined by my decision-making algorithm. This part of free will is completely real.
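A minimal sketch of the point (the `decide` function, options, and utilities are purely hypothetical, chosen to echo Newcomb-style examples): even with the full source code and inputs, the only way Omega determines the output is by executing the same algorithm the agent does.

```python
# Hypothetical decision procedure: Omega knows this source code and the inputs.
def decide(options, utility):
    """Pick the option the agent values most."""
    return max(options, key=utility)

options = ["one box", "two box"]
utility = {"one box": 1_000_000, "two box": 1000}.get

# Omega's "prediction" is nothing over and above running the algorithm;
# the agent's action is determined *by* that algorithm, not despite it.
omegas_prediction = decide(options, utility)
my_action = decide(options, utility)

assert omegas_prediction == my_action
```

That the two calls necessarily agree is the determinism; that the answer only exists once the algorithm runs is why the decision still feels like mine.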
(Map vs territory distinction, essentially. Free will exists on the map, not in the territory. It is not an illusion, in the sense that it is actually there on the map; without perfect self-knowledge it can’t be otherwise.)
Yes! But with a caveat. This state of not knowing which action will actually be executed seems to be essential to the working of the decision-making algorithm: options need to be marked as reachable so that our tree search can find the best one. Also, the distinction between map and territory becomes fuzzy when the territory is our map-making engine. Our decision-making algorithm is embedded in our brain; in this sense our freedom of will is more than just a part of the map.
I haven’t read Harris’s book, but my guess would be that he takes appropriate care not to sound like he does more than describing the world he sees.
I had originally expected exactly that from the book! But in my opinion it didn’t turn out to be the case. I’m pretty sure Harris could have done it if he’d intended to. My guess is that he wanted to be more relatable and appealing to a lay reader rather than polish his language too much.
I assume that by determinists you meant the so-called “boring view of reality”, something in the cluster of causal determinism and reductive materialism. I seem to fit quite well in there, so here is my take:
Humans have free will in the sense of decision making, planning, achieving our goals, and shaping the future the way we would like it to be. This (spoiler alert) requires causal determinism, or at least quite a lot of it. It does not require indeterminism, or the existence of counterfactual worlds outside one’s mind, at all. The libertarian definition of free will doesn’t seem to make any sense.
All the philosophical groundwork for solving the hard problem of consciousness is already done. We can understand in principle how apparently-different-in-kind entities can be reducible to one thing. Now it’s a scientific problem to figure out the exact reduction. Qualia are physical.
I’ve been genuinely confused about all this anthropics stuff and read your sequence in hope of some answers. Now I understand better what SSA and SIA are. Yet I am not any closer to understanding why anyone would take these theories seriously. They often don’t converge to normality, and they depend on weird a priori reasoning which doesn’t resemble the way cognition engines produce accurate maps of the territory.

SSA and SIA work only in those cases where their base assumptions are true. And in different circumstances and formulations of thought experiments, different assumptions will be true. Then why, oh why, for the sake of rationality, are we expecting to have a universal theory/reference class for every possible case? Why do we face this false dilemma of which ludicrous bullet to bite? Can we rather not?

Here is a naive idea for a superior anthropics theory: we update on anthropic evidence only if both SSA and SIA agree that we should. That saves us from all the presumptuous cases. It prevents us from having precognitive, telekinetic, and other psychic powers to blackmail reality, while still allowing us to update in God’s equal-numbers coin-toss scenarios.
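The proposed rule can be sketched numerically. A caveat on assumptions: the scenarios and numbers below are mine, and I use only the simplest textbook forms of SSA and SIA, where the sole evidence is “I exist”. In that setting SSA keeps the prior over worlds, SIA reweights each world by its observer count, and the rule updates only when the two posteriors agree.

```python
# Sketch of the "update only if SSA and SIA agree" rule, in its simplest
# form: given just the evidence "I exist", SSA leaves the prior over worlds
# untouched, while SIA reweights each world by how many observers it
# contains. Scenarios and numbers are purely illustrative.

def normalize(ws):
    total = sum(ws.values())
    return {w: p / total for w, p in ws.items()}

def ssa(priors, observer_counts):
    # "I exist" is certain in every world that has observers: no reweighting.
    return normalize({w: p for w, p in priors.items() if observer_counts[w] > 0})

def sia(priors, observer_counts):
    # Worlds with more observers get proportionally more weight.
    return normalize({w: p * observer_counts[w] for w, p in priors.items()})

def combined(priors, observer_counts, tol=1e-9):
    a, b = ssa(priors, observer_counts), sia(priors, observer_counts)
    agree = all(abs(a[w] - b[w]) < tol for w in priors)
    return a if agree else normalize(priors)  # refuse to update on disagreement

priors = {"heads": 0.5, "tails": 0.5}

# Presumptuous case: tails world has a trillion observers, heads world one.
# SSA says 1/2, SIA says ~1; they disagree, so the rule keeps the prior.
print(combined(priors, {"heads": 1, "tails": 10**12}))

# Equal-numbers case: both worlds have the same number of observers,
# SSA and SIA agree, so the update goes through (here it is trivial).
print(combined(priors, {"heads": 100, "tails": 100}))
```

In the equal-numbers scenarios the interesting updates come from further evidence (e.g. which room you wake up in), which both theories handle identically; this sketch only covers the bare existence update where they clash.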
I’m pretty sure there are better approaches; I’ve heard lots of good things about UDT, but haven’t yet dived deep enough into it. I’ve found some intuitively compelling approaches to anthropics on LW. Then why do we even consider SSA or SIA? Why are people amused by Grabby Aliens or the Doomsday Argument in 2021?
I really empathize with being troubled by such questions. I was amused by them a decade or so ago, and I found a way to actually make peace with them before I discovered Less Wrong, which in turn gave me some crucial insights, allowing me to solve these enigmas to my own satisfaction.
The way I originally made peace with these questions was by embracing the doubts rather than running from them. To, as you put it, “surrender to radical skepticism”. Suppose the questions are indeed unsolvable. That there is no ultimate justification, that everything is doubtful, that no absolute truth can ground our knowledge. Why would that be bad? How would we navigate such a world?
The first impulse may be to fall for the fallacy of gray. That’s understandable. But notice that some things are still easier to doubt than others. You may doubt your sensory inputs and your whole reasoning process. Allow yourself that. Try it for a while and notice how much harder it is than doubting the existence of an invisible pink unicorn. There is no rule that compels you to doubt so hard in some specific cases but not in others; if such a rule existed, it would be easy to doubt it as well. And notice that when you approach everything with the same level of doubt, it all adds up to normality.
The questions aren’t answered yet. Why is it easier for me to doubt X than Y? But they are no longer torturous when you ground your knowledge in doubt rather than in certainty. Why did you think that absolute certainty was necessary in the first place? Isn’t that idea really weird? How would it even work?