Indeed, I agree with you 100% that EY could have convinced Pat to raise his probability if he had used better arguments (though 10% might still be a bit too high). But some things about your comment are weird to me. Why do you say “dispatch” instead of “reach agreement”? Why say “[EY is] being too nice” instead of “EY is defending his point poorly”? From my point of view, Pat is being reasonable and is merely missing some information that EY is failing to provide. From EY’s point of view, Pat is doing something fundamentally wrong. Your comment is defending my point with its content, but it’s phrased as though it defended EY’s.
zulupineapple
Currently I consider Yudkowsky, Scott Alexander, and Nick Bostrom to be three of the most important people.
Most important in what sense? Please don’t be a cultist.
How do you evaluate P(sun will rise tomorrow) then?
Suppose I’m the counterfactual oracle. To every question I answer with K. Eventually Alice reads K, no matter how frequent E is. Then I get maximal reward. Am I missing something? Is the paper assuming that the oracle is incapable of long-term planning?
But why is that a reasonable assumption to make? Aren’t you just assuming that the AI will play nice? I can see that there are some dangerous Oracles that we can protect from using your strategy, but there are also many that it wouldn’t hinder at all.
The premise seems to be that there is no model: you’re seeing the sun for the first time. Presumably there are also no stars, planets, or moons in the sky, and no telescopes or other tools that would help you build a decent cosmological model.
In that situation you may still realize that there is one thing rotating around another and deduce that P(sunrise) = 1-P(apocalypse). Unless you happen to live in the Arctic, or your planet is rotating in some weird ways, or it’s moving in a weird orbit, or etc.
My point is that estimating P(sunrise) is not trivial; the number can’t just be pulled out of the air. I don’t see anything better than Laplace’s rule, at least initially. You said it doesn’t work, so I’m asking you: what does work?
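For reference, Laplace’s rule of succession estimates the probability of the next success as (s + 1) / (n + 2) after s successes in n trials. A minimal sketch (the 10,000-day figure is an arbitrary illustration, not a claim about actual observation counts):

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Rule of succession: P(next success) = (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# With no observations at all, the estimate is the uniform prior:
print(laplace_rule(0, 0))            # 0.5
# After the sun has risen on every one of 10,000 observed days:
print(laplace_rule(10_000, 10_000))  # ≈ 0.9999
```

Note that the rule never outputs exactly 0 or 1, which is precisely why it gives a usable first answer when no cosmological model is available.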
Mathematics is a social activity in the same way politics is a social activity. As in, it’s an activity which is social, or at least predicated on some sort of society.
Are you saying that nothing a hermit would ever do can be called mathematics? That doesn’t seem right.
That’s a pretty low bar. Is wiping your ass a social activity too? Because, presumably, your mom taught you how to do it, and the fact that you’re doing it with paper is strongly influenced by earlier ass-wipers’ choices.
But never mind that. Suppose the hermit never learned any math, not even addition. Will you say that his math would still be social, because he already knew the words “zero”, “one”, “two”, which hint at the set of naturals? Then suppose that the hermit has not seen a human since the day he was born, was raised by wolves, developed his own language from zero, and then described some theory in that (indeed, this hermit might be the greatest genius who ever lived). Surely that’s not social. But is it not math?
Unlike grounded intuitions, an ungrounded one may be such that it’s never modified by new information. This doesn’t describe all ungrounded intuitions, but it describes the ones we’re interested in.
I think my first intuition of “set” was modified by observing Russell’s paradox.
This is a fine introduction to constructive logic. And, indeed, I suspect that constructive logic could be popular in this community, if it were better known.
Still, I don’t really understand what the purpose of this series is. Your first post made some bold claims regarding what mathematics is (and the title hints at something like that too), which I don’t think were sufficiently explained, and it’s strange to see none of that in part 2. Was that central to your goal or just a curiosity? Are we coming back to that in later parts?
And does “mathematics” mean “constructive logic”? You criticise people’s narrow view of foundations, but aren’t you just replacing it with a different but equally narrow view? I think some philosophical discussion is required.
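As a small illustration of the constructive flavor at issue: in constructive logic the law of excluded middle is not a theorem, yet its double negation is. A proof assistant can check this; here is a sketch in Lean 4 syntax:

```lean
-- Excluded middle (P ∨ ¬P) is not provable constructively,
-- but its double negation is — a classic constructive exercise.
example (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```

Dropping the double negation would require the classical axiom, which is exactly the line that separates the constructive foundation from the one most people assume.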
Do unicorns exist? It seems to me that your arguments are fully general. You can, in fact, make true statements about unicorns (“every unicorn has a horn”) and perhaps some of them might not even seem trivial. It’s just that numbers are more precise, so we can make more claims about them, and more concise, so we can assume that my numbers and your numbers are the same.
Do you see some difference between saying “numbers exist” and “I think/feel that numbers exist”? I sure don’t.
Regarding unicorns, how do your arguments support their non-existence? I’m seeing the opposite. I think with your arguments every idea and concept could be said to exist.
If I think of this as “numbers exist,” then I’ll start asking question like “where did numbers come from?”
That’s not an experience I can relate to, but ok.
And once you understand where your belief comes from, I think you actually end up caring less about whether numbers “exist” or not
I see where you’re coming from, however I’m a big believer in the concept that words should mean things. If you find the word “exist” too vague for your purposes, you should propose a more precise definition, or use a different word.
Anyhow, the key thing from this post that doesn’t apply to unicorns is that there’s no experience of having separate things cause our hypotheses and our updates about unicorns.
I’m saying that there is. For now, instead of unicorns, consider god. There is an entire field, theology, focused on reasoning about god, creating hypotheses about it and finding them wrong. But hopefully we don’t feel that god exists (or if we do feel it, that’s not thanks to theology). Or consider the Star Wars universe. Likewise, there are many fans who reason about what belongs to this universe and what does not, and where there is reasoning, there is a chance to find our hypotheses wrong. The same is true for every idea; it’s only that unicorns are a degenerate case: the reasoning about them is too trivial to find yourself wrong. But if we were morons, perhaps we’d find the hypothesis “unicorns have one horn” to be novel and profound.
This might shock you, but I think you’re one of the button-people. You’re asked about “minimum wage” and, without thinking, the most defensible claim you know comes out of your mouth (by defensible, I mean either easy to prove or hard to falsify). But why? Surely, you rationally understand that social connections have value, and that talking to people is a social activity. Yet your behaviors don’t reflect that understanding.
Seriously though, I find your dichotomy quite bad. It’s true that some people worry about the consistency of their beliefs more than others. That’s because enforcing consistency takes effort. You think that your mind works in distinctly different ways, while in reality you’re merely wasting your efforts on unimportant things that most people know not to bother with. That’s probably because you’ve put yourself in the “rational” social group, and that’s just what “rational” people do.
Another issue is that “family values” and not helping our brother don’t need to contradict each other. It sounds like you build trivial models of people, observe the models fail, and deduce that no reasonable models exist. This is not to say that people never contradict themselves, of course. And I’m willing to imagine that you’re talking about a real person, whose circumstances and values are well known to you, and who truly is contradicting themselves. However, the text does not suggest this convincingly.
I would like to question whether the intuitive concept of complexity makes sense. In what ways is human civilization more complex than a puddle of water? Aliens watching us through their telescopes may observe our effects on the planet and find them fairly simple and predictable in terms of the ways we affect the atmosphere as we climb our tech tree, or even in terms of how we will affect the solar system in the sci-fi future. At the same time, a scientist looking at a dozen molecules of water might find their movement highly complex and explainable by beautiful abstractions. I propose that the real reason we find people more complex than water is because we happen to care about people.
That seems to be a rather general response that doesn’t feel very relevant to my point. Anyway, if you agree that the human intuition of complexity depends on “provincial interests” I was trying to point out, then you should also agree that OP doesn’t reflect those interests in his complexity measure, right?
Also, some concepts are more natural than others. If we agree that the intuitive complexity is not very natural, we may still want to model it for some purposes, but it also makes sense to abandon it in favor of a more natural concept.
This behavior is also what you would expect to happen if different people had different preferences. The quiet friend might worry about being too loud, because they would actually prefer to be even quieter. You could say that they’re projecting their own preferences on you, which is an error, but a slightly different one. And I admit that this doesn’t fit all of your examples.
Another issue. Why is there “loudest” in the title? Why would this only happen with loud alarms? Surely, minor alarms can be warped through the same mechanisms, right?
The mugger claiming that they can affect a googolplexplex lives doesn’t give them exclusive access to a non-zero probability of affecting a googolplexplex lives; other ways do exist.
Why do you think that? What is the probability that the mugger does in fact have exclusive access to 3^^^^3 lives? And what is the probability for 3^^^^^3 lives?
By the way, what happens if a billion independent muggers all mug you for 1 dollar, one after another?
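To make the billion-muggers question concrete, here is a hypothetical sketch of the decision rule under attack: pay whenever the claimed payoff times your credence in the claim exceeds the demand. The credence and payoff figures are made-up assumptions for illustration only:

```python
# Hypothetical rule: pay if (claimed payoff) * (credence) > demand.
# CREDENCE and CLAIM below are illustrative assumptions, not estimates.
def pays(claimed_lives: float, credence: float, demand: float = 1.0) -> bool:
    return claimed_lives * credence > demand

MUGGERS = 1_000_000_000
CREDENCE = 1e-50   # tiny credence in any one mugger's claim
CLAIM = 1e60       # astronomically large claimed payoff

# Each mugger independently passes the test, so the agent pays all of them:
total_paid = MUGGERS if pays(CLAIM, CREDENCE) else 0
print(total_paid)  # 1000000000
```

Any rule of this shape loses a dollar to every mugger in the line, which is the point of the question.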
Is Pat poorly calibrated, though? I don’t think he is, and I don’t see anything in the text to suggest that he is. If you’re going to criticize Pat’s decision process, I would hope the first argument would be “it doesn’t work well”. If it does, in fact, work, then maybe you’re the one with flawed reasoning.
The arguments why Aumann’s agreement theorem doesn’t apply need a lot more work. It’s a pretty big claim.
The whole “hero license” thing is indeed ad hominem. Pat is not demanding to see your hero license, Pat is predicting your future performance based on past performance.
It’s weird to agree so much with a straw man that you wrote yourself. I’m willing to assume that I’m missing the point completely, and I’d be happy if someone could clear it up for me.