I’ve been genuinely confused about all this anthropics stuff and read your sequence in the hope of some answers. Now I better understand what SSA and SIA are. Yet I am no closer to understanding why anyone would take these theories seriously. They often don’t converge to normality, and they depend on weird a priori reasoning which doesn’t resemble the way cognition engines produce accurate maps of the territory.
SSA and SIA work only in those cases where their base assumptions are true. And in different circumstances and formulations of thought experiments, different assumptions would be true. Then why, oh why, for the sake of rationality, do we expect to have a universal theory/reference class for every possible case? Why do we accept this false dilemma of which ludicrous bullet to bite? Can we rather not?
Here is a naive idea for a superior anthropics theory. We update on anthropic evidence only if both SSA and SIA agree that we should. That saves us from all the presumptuous cases. That prevents us from having precognitive, telekinetic, and any other psychic powers to blackmail reality, while still allowing us to update in God’s coin toss scenarios with equal numbers.
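A minimal sketch of what I mean, under my own assumptions about how to score the two theories: each hypothesis gets a prior, a count of observers in your exact epistemic situation, and a total observer count. SSA weights by the fraction of observers like you; SIA weights by their absolute number. The naive rule updates only when the two posteriors coincide; otherwise it keeps the prior. (All names here are mine, and the tolerance check is just one hypothetical way to operationalize “agree.”)

```python
# Hypothetical sketch of the "update only when SSA and SIA agree" rule.
# Scenario format per hypothesis: (prior, observers_like_you, total_observers).

def ssa_posterior(scenarios):
    # SSA: weight each hypothesis by the FRACTION of its observers
    # who are in your epistemic situation.
    weights = {h: p * (like / total) for h, (p, like, total) in scenarios.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

def sia_posterior(scenarios):
    # SIA: weight each hypothesis by the NUMBER of observers
    # in your epistemic situation.
    weights = {h: p * like for h, (p, like, _total) in scenarios.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

def cautious_posterior(scenarios, tol=1e-9):
    ssa, sia = ssa_posterior(scenarios), sia_posterior(scenarios)
    if all(abs(ssa[h] - sia[h]) < tol for h in scenarios):
        return ssa  # both theories agree: accept the update
    # they disagree: refuse to update, keep the prior
    return {h: p for h, (p, _like, _total) in scenarios.items()}

# God's coin toss with equal numbers: heads -> 1 red room and 9 blue,
# tails -> 9 red and 1 blue; you wake in a red room. Both theories
# give P(heads) = 0.1, so the rule updates.
equal_numbers = {"heads": (0.5, 1, 10), "tails": (0.5, 9, 10)}
print(cautious_posterior(equal_numbers))   # {'heads': 0.1, 'tails': 0.9}

# Sleeping Beauty: heads -> 1 awakening, tails -> 2. SSA keeps 1/2,
# SIA says 1/3; they disagree, so the rule leaves the prior untouched.
sleeping_beauty = {"heads": (0.5, 1, 1), "tails": (0.5, 2, 2)}
print(cautious_posterior(sleeping_beauty))  # {'heads': 0.5, 'tails': 0.5}
```

Notice how this plays out: in the equal-numbers case the two theories converge, so we get the ordinary Bayesian answer; in the contested cases they diverge, and the rule just declines to pick a side.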
I’m pretty sure there are better approaches. I’ve heard lots of good things about UDT, but haven’t yet dived deep enough into it, and I’ve found some intuitively compelling approaches to anthropics on LW. Then why do we even consider SSA or SIA? Why are people still entertaining Grabby Aliens or the Doomsday Argument in 2021?
I really empathize with being troubled by such questions. I was troubled by them a decade or so ago, and I found a way to actually make peace with them before I discovered Less Wrong, which in turn gave me some crucial insights, allowing me to resolve these enigmas to my own satisfaction.
The way I originally made peace with these questions was through embracing the doubts rather than running from them: to, as you put it, “surrender to radical skepticism.” Suppose that the questions are indeed unsolvable. That there is no ultimate justification, that everything is doubtful, that no absolute truth can ground our knowledge. Why would that be bad? How would we navigate in such a world?
The first impulse may be to fall for the fallacy of gray. That’s understandable. But notice that some things are still easier to doubt than others. You may doubt your sensory inputs and your whole reasoning process. Allow yourself to. Try it for a while and notice how much harder it is than doubting the existence of an invisible pink unicorn. There is no rule that compels you to doubt so strongly in some specific cases but not in others. If such a rule existed, it would be just as easy to doubt it. And notice that when you approach everything with the same level of doubt, it all adds up to normality.
The questions aren’t answered yet. Why is it easier for me to doubt X than Y? But they are no longer torturous when you try to ground your knowledge in doubt rather than in certainty. Why did you think that absolute certainty was necessary in the first place? Isn’t this idea really weird? How would it even work?