No, because that’s a meaningless claim about external reality. The only meaningful claims in this context are predictions.
“Do you expect to see chaos, or a well formed world like you recall seeing in the past, and why?”
The latter. Ultimately that gets grounded in Occam’s razor and Solomonoff induction, which make the latter hypothesis simpler.
I’ve spent a lot of time and written a handful of posts (including one on the interaction between Solomonoff and SIA) building my ontology. Parts may be mistaken but I don’t believe it’s “confused”. Tabooing core concepts will just make it more tedious to explain, probably with no real benefit.
In particular, the only actual observations anyone has are of the form “I have observed X”, and that needs to be the input into Solomonoff. You can’t input a bird’s eye view because you don’t have one.
Anyway, it seems weird that being altruistic affects the agent’s decision on a purely local bet. You end up with the same answer as I do on that question, acting “as if” the probability were 90%, but in a convoluted manner.
Maybe you should taboo probability. What does it mean to say that the probability is 50%, if not that you’ll accept purely local bets with better odds and not worse odds? The only purpose of probability in my ontology is for predictions for betting purposes (or decision making purposes that map onto that). Maybe it is your notion of probability that is confused.
A couple of things.
If you’re ok with time-inconsistent probabilities then you can be Dutch booked.
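A minimal sketch of that Dutch book, with illustrative numbers I’m choosing for the example (a credence that shifts from 50% to 90% on the same contract): a bookie who trades at both prices extracts a sure profit.

```python
# Dutch book against time-inconsistent probabilities (illustrative numbers).
# A contract pays $1 if Heads. The agent prices it at 50% at one time and
# at 90% at a later time, so a bookie can trade at both prices.

def dutch_book_loss(p_before: float, p_after: float, stake: float = 1.0) -> float:
    """Agent's guaranteed loss: sells the contract at p_before, buys it back at p_after."""
    # The agent sells the $1-if-Heads contract for p_before (fair by its earlier
    # credence), then repurchases the same contract for p_after (fair by its
    # later credence). The contract positions cancel, leaving a certain loss.
    return (p_after - p_before) * stake

loss = dutch_book_loss(0.5, 0.9)
print(f"Agent loses ${loss:.2f} no matter how the coin lands")
```

The loss is independent of the coin’s outcome, which is the defining feature of a Dutch book: no single probability assignment is being exploited, only the inconsistency between the two.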
I think of identity in terms of expectations. Right before you go to sleep, you have a rational subjective expectation of “waking up” with any number from 1–20, each with 5% probability.
It’s not clear how the utility function in your first case says to accept the bet given that you have the probability as 50/50. You can’t be maximizing utility, have that probability, and accept the bet—that’s just not what maximizes expected utility under those assumptions.
My version of the bet shouldn’t depend on if you care about other agents or not, because the bet doesn’t affect other agents.
You can start with Bostrom’s book on anthropic bias. https://www.anthropic-principle.com/q=book/table_of_contents/
The bet is just each agent is independently offered a 1:3 deal. There’s no dependence as in EY’s post.
You’re just rejecting one of the premises here, and not coming close to dissolving the strong intuitions/arguments many people have for SIA. If you insist the probability is 50/50 you run into paradoxes anyway: if each agent is offered a 1:3 odds bet, they would reject it while believing the probability is 50%, yet in advance you would want agents seeing green to take the bet.
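The arithmetic behind that paradox can be sketched quickly, assuming the standard green-room setup from EY’s post (heads: 18 of 20 agents see green; tails: 2 of 20), with each green agent independently offered $1 if heads against $3 if tails:

```python
# Independent 1:3 bet for each green-room agent, assuming the standard
# green-room setup (heads: 18 of 20 agents see green; tails: 2 of 20).
# A green agent wins $1 on heads and loses $3 on tails.

WIN, LOSS = 1.0, 3.0
GREEN_HEADS, GREEN_TAILS = 18, 2

# Per-agent EV under the two candidate credences for "heads, given I see green":
ev_sia  = 0.9 * WIN - 0.1 * LOSS   # SIA credence 0.9 -> positive, accept
ev_5050 = 0.5 * WIN - 0.5 * LOSS   # 50/50 credence  -> negative, reject

# Ex ante EV of the policy "every green agent accepts", computed before the flip:
ev_policy = 0.5 * (GREEN_HEADS * WIN) + 0.5 * (GREEN_TAILS * -LOSS)

print(ev_sia, ev_5050, ev_policy)
```

The tension is that the 50/50 agent rejects a bet (per-agent EV −$1) that the ex ante policy calculation says greens should take (policy EV +$6), while the SIA credence of 0.9 makes the two answers agree.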
Yes, rejecting probability and refusing to make predictions about the future is just wrong here, no matter how many fancy primitives you put together.
I disagree that standard LW rejects that, though.
Variance only increases the chance of Yes here. If cases spike and we’re averaging over 100k, reporting errors won’t matter. If we’re averaging 75k, a state dumping extra cases could plausibly push it over 100k.
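A quick sketch of the variance point, using hypothetical numbers and a normal approximation for the 7-day average: when the expected average sits below the 100k threshold, widening the distribution only raises the probability of crossing it.

```python
# Sketch of the variance point (hypothetical numbers, normal approximation):
# with the expected 7-day average below the 100k threshold, extra variance
# raises the chance of ending up above it.
from statistics import NormalDist

THRESHOLD = 100_000
mean = 75_000  # assumed expected average, below the threshold

for sd in (5_000, 15_000, 25_000):
    p_yes = 1 - NormalDist(mean, sd).cdf(THRESHOLD)
    print(f"sd={sd:>6,}: P(avg > 100k) = {p_yes:.4f}")
```

With the mean on the No side of the threshold, P(Yes) is monotonically increasing in the standard deviation, which is the asymmetry the comment relies on.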
Two Moderna doses here with no significant side effects
I know what successful communication looks like.
What does successful representation look like?
Yes, it appears meaningless; I and others have tried hard to figure out a possible account of it.
I haven’t tried to get a fully general account of communication but I’m aware there’s been plenty of philosophical work, and I can see partial accounts that work well enough.
I’m communicating, which I don’t have a fully general account of, but is something I can do and has relatively predictable effects on my experiences.
Not at all, to the extent the head is a territory.
What does it mean for a model to “represent” a territory?
>On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine.
That’s just part of my model. To the extent that empathy of this nature is useful for predicting what other people will do, that’s a useful thing to have in a model. But to then say “other people have subjective experiences somewhere ‘out there’ in external reality” seems meaningless—you’re just asserting your model is “real”, which is a category error in my view.
My own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism and the post it links back to.
It seems that if external reality is meaningless, then it’s difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.
But, provided you speak about this notion, why would verificationism lead to external world anti-realism?
Anti-realism is not quite correct here, it’s more that claims about external reality are meaningless as opposed to false.
One could argue that synthetic statements aren’t really about external reality: What we really mean is “If I were to check, my experiences would be as if there were a tree in what would seem to be my garden”. Then our ordinary language wouldn’t be meaningless. But this would be a highly revisionary proposal. We arguably don’t mean to say something like the above. We plausibly simply mean to assert the existence of a real tree in a real garden.
I’m not making a claim about what people actually mean by the words they say. I’m saying that some interpretations of what people say happen to lack meaning. I agree that many people fervently believe in some form of external reality, I simply think that belief is meaningless, in the same way that a belief about where the electron “truly is” is meaningless.
I granted your supposition of such things existing. I myself don’t believe any objective external reality exists, as I don’t think those are meaningful concepts.
Perhaps. It’s not clear to me how such facts could exist, or what claims about them mean.
If you’ve got self locating uncertainty, though, you can’t have objective facts about what atoms near you are doing.