He’s not that libertarian in the political sense, though probably more than either of us.
countingtoten
They had arguments about physics that the OP weirdly downplays. Like I said below: Copernicus disliked the equant because it contradicted the most straightforward reading of Ptolemy’s own physics; Kepler unambiguously disproved scholastic physics. Also, Galileo discovered Galilean relativity. He definitely made enough observations to show this last idea had something to it, unlike the scholastic explanation of heavenly bodies.
I want to pursue this slightly. Before recent evidence—which caused me to update in a vague way towards shorter timelines—my uncertainty looked like a near-uniform distribution over the next century with 5% reserved for the rest of time (conditional on us surviving to AGI). This could obviously give less than a 10% probability for the claim “5-10 years to strong AI” and the likely destruction of humanity at that time. Are you really arguing for something lower, or are you “confident” the way people were certain (~80%) Hillary Clinton would win?
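The arithmetic behind "obviously give less than a 10% probability" can be made explicit. A minimal sketch, using the assumed numbers from the prior described above (95% of mass near-uniform over the next century, 5% reserved for all later time):

```python
# Sketch of the prior described above (assumed numbers, for illustration):
# 95% of probability mass spread uniformly over the next 100 years,
# 5% reserved for all time after that (conditional on reaching AGI).
mass_next_century = 0.95
years_in_window = 10 - 5  # the "5-10 years to strong AI" claim spans 5 years

# Probability the uniform prior assigns to that 5-year window:
p_claim = mass_next_century * years_in_window / 100
print(p_claim)  # 0.0475, comfortably under 10%
```

So even this deliberately agnostic prior puts under 5% on the specific "5-10 years" window, which is the point of the comparison.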
OP seems like a good argument for the weak claim you apply to your own field, but then goes off the rails. For now I’ll note two points that seem definitely wrong.
1:
Bayesian accounts of epistemology seem to go haywire if we think one should have a credence in Bayesian epistemology itself,
On a practical level this just seems false. On an abstract level probability doesn’t deal with uncertainty about mathematical questions; but MIRI and others have made progress on this very issue. I think true modesty would lead you to see such issues as eminently solvable. (This is around the point where you seem to stop arguing for the standard you apply to yourself, on questions you care about, and start making more sweeping claims.)
I peripherally note that if you reject the notion of a degree of credence justified by your assumptions and evidence, you suddenly have a problem explaining what your thesis even means and why (by your lights) anyone should care. But I don’t think you actually do reject it (and you haven’t expressly questioned any other assumptions of Cox’s Theorem or the strengthened versions thereof).
2:
(e.g. the agreement of the U.S. and German governments with the implied view of the physicists). This is a lot more involved, but the expected ‘accuracy yield per unit time spent’ may still be greater than (for example) making a careful study of the relevant physics.
This is partly an artifact of the example, but I do not think a layman at the time could get any useful information at all by your method—not without getting shot. Also, you forgot to include a timeframe in the question. This makes theoretical arguments much more relevant than usual (see also: cryonics). It doesn’t take much study of physics to realize that a large positively-charged atomic nucleus could, in principle, fly apart. Knowing what that would mean takes more science, but Special Relativity was already decades old.
I assume you believe you’re awake because you’ve tried to levitate, or tested the behavior of written words, or used some other more-or-less reliable test?
So, if people want more social status, then their behavior in your narrative feels obviously wrong to me. Choosing that behavior feels like it would encourage others to slap their own efforts down. In practice, maybe few people share my decision procedure and I ‘should’ slap other status-seekers in order to make room for myself (though the latter doesn’t strictly follow). But even if that’s true, I don’t think it informs my instinctive reaction. (I do pity physics cranks who don’t inconvenience me personally or harm anything I care about. That ‘Loss’ meme always slightly horrified me, though I admit I don’t know the guy’s comic well.)
Are you arguing that most people don’t seek increased status, or that they don’t think this way?
I get that we tend to overestimate our suffering/work relative to that of others, but that doesn’t automatically make us hate everyone who wants another dollar in their bank account. Does it?
Another puzzling feature of your diagnosis: if most people treat status as a resource like money, then why wouldn’t they try to award it for service to their tribe? That feels like a natural compromise between status-seekers and those who want to stay big fish in some small pond. The alternative described in the OP seems, well, obviously cultish. It suggests a pond in which big fish claim divine right to rule (as opposed to eg claiming their rule benefits all fish) and everyone goes along with this for some reason.
I don’t see it. Maybe you think fox epistemology wouldn’t donate to MIRI, which is presumably what Eliezer cares about? But what he claims repeatedly is that we should judge situations just as you say, and he offers a way to do this.
Um, you just refuted a crackpot claim on the object level, using the kind of common-sense argument that I (a layman) heard from a physics teacher in high school. ETA: This may illustrate a problem with the neat, bright-line categories you’re assuming.
On a similar note: I remember a speech given by a young-Earth creationist that I think differs from lesser crankdom mainly in being more developed. As the lie aged it needed to birth more lies in response to the real world of entangled truths. And while I couldn’t refute everything the guy said—that’s the point of a Gish Gallop—I knew a cat probably couldn’t be a vegetarian.
No, seriously, what you’re saying sounds like nonsense. Number one, dreams can have vivid stimuli that I recall explicitly using as evidence that I wasn’t dreaming; of course I’ve also thought I was performing mundane tasks. Number two, how does dream-you tell the difference without having “tested the behavior of written words, or used some other more-or-less reliable test?”
The part about sensory data sounds totally wrong to me personally, and of course you know where this is going (see also). Whereas my dream self can, in fact, notice logical flaws or different physics and conclude that I’m dreaming.
That’s actually not quite right—my dream *content* varies widely in how mundane it is. My point is that I learned to recognize dreams not by practicing the thought ‘This experience is too vivid to be a dream,’ but by practicing tests which seemed likely to work.
Like many people in the past year, I frequently wonder if I’m dreaming while awake. Such waking moments seem to make up >10% of the times I’ve tested whether I was dreaming. I’m also running out of ways to say that I mean what I say.
You may be right that the vast majority of the time (meaningful cough) when humans wonder if they’re dreaming, they are. People who know that may account for nearly all exceptions.
I don’t think horrible people would have disliked Kurt Gödel?
If horrible people like you, that does usually mean you aren’t doing enough for the people they hate.
When I saw the title, I thought, ‘But we want to decompose problems in FAI theory to isolate questions we can answer. This suggests heavy use of black boxes.’ I wondered if perhaps he was trying to help people who were getting everything wrong (in which case I think a positive suggestion has more chance of helping than telling people what to avoid). I was pleased to see the post actually addressed a more intelligent perspective, and has much less to do with your point or mine.
I mean, physically assaulting anyone is a crime; so the OP arguably violates one of these existing rules. This is definitely true (technically) if he suggested doing anything like that with newcomers to a LW meetup unless they specifically say not to. While we likely want a looser approach to enforcement (compared to a zero-tolerance policy that would ban Duncan) it sounds to me like you should tell him not to do it again.
You mean to say that deliberate anti-epistemology, which combines dehumanization with anthropomorphism, turns out to be bad?
Really, no link to orthonormal’s sequence?
I think you haven’t zeroed in on the point of the Mary’s Room argument. According to this argument, when Mary exclaims, “So that’s what red looks like!” she is really pointing to a non-verbal belief she was previously incapable of forming. (I don’t mean the probability of her statement, but the real claim to which she attached the probability.) So it won’t convince any philosophers if you talk about mAIry setting a preexisting Boolean.
Of course this argument fails to touch physicalism—some version of mAIry could just form a new memory and acquire effective certainty of the new claim that “Red looks like {memories of red},” a claim which mAIry was previously incapable of even formulating. (Note that eg this claim could be made false by altering her memories or showing her a green apple while telling her “Humans call this color ‘red’.” The claim is clearly meaningful, though a more carefully formulated version might be tautological.) However, the OP as written doesn’t quite touch Mary’s Room.
The first potential problem I see is that “information available” should be relative to an information-storage device like a human brain, whereas time in (my limited understanding of) physics is relative to a rock or other physical frame of reference. Those seem different.
If we try to remove that problem then we get a new one (which might exist anyway in a less acute form). When we take as our “given phenomenon” something large in spatial area, like ‘the Earth at exactly 4:40 am EST in this frame of reference,’ we find vastly more available information than we could have—even in principle, I would think—for many phenomena we would consider to take more time. So this definition doesn’t seem to match the word.
One is phrased or presented as knowledge. I don’t know the best way to approach this, but to a first approximation the belief is the one that has an explicit probability attached. I know you talked about a Boolean, but there the precise claim given a Boolean value was “these changes have happened”, described as an outside observer would, and in my example the claim is closer to just being the changes.
Your example could be brought closer by having mAIry predict the pattern of activation, create pointers to memories that have not yet been formed, and thus formulate the claim, “Purple looks like n<sub>p</sub>.” Here she has knowledge beforehand, but the specific claim under examination is incomplete or undefined because that node doesn’t exist.
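The pointer idea above can be made concrete with a toy model. This is a hypothetical sketch (all names and structures are mine, not the OP's): setting a preexisting Boolean is always possible, but the claim "Purple looks like n<sub>p</sub>" is only well-formed once the memory node it points to actually exists.

```python
# Toy model (hypothetical names): indexical claims refer to memory nodes by id.
memories = {}  # node id -> stored experience

def claim_purple_looks_like(node_id):
    """The claim is only well-formed once its referent exists."""
    if node_id not in memories:
        return None  # incomplete/undefined: the node has not been formed yet
    return ("purple looks like", memories[node_id])

# Before the experience: the preexisting Boolean can be set freely...
has_seen_purple = False
# ...but the indexical claim itself is still undefined.
assert claim_purple_looks_like("n_p") is None

# After the experience, the node exists and the claim becomes expressible.
memories["n_p"] = "<raw memory of purple>"
has_seen_purple = True
assert claim_purple_looks_like("n_p") == ("purple looks like",
                                          "<raw memory of purple>")
```

The design point is that the Boolean and the claim live at different levels: the Boolean describes the system from outside, while the claim's content includes the newly formed node itself.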
By completely ignoring physics until Galileo, you paint a deceptive picture.
In Aristotle’s physics, each god inspired a different but equally regular and circular motion in the heavens.
Copernicus objected to the equant because it was not a regular circular motion. It just modified another circle, which seems like an obvious contradiction. If we treat it as a motion added to the system, it would be something like motion along a (rotating) radius. The planet would go back and forth in a straight line that happens to produce a modified circle. Now, we could imagine that all of these circles are conceptual rather than being actual motions added together. We could say that the deities involved compel the actual motion of the planet in its (single) crystal sphere to act as if influenced by other, imaginary circles. But that would seem to require a more active role for the deities, leading to awkward questions. That seems like the major reason why people called Copernicus more coherent and elegant.
Kepler—as you point out and then ignore—showed that all previous systems gave false predictions, and you could get true ones (according to the observations of the time) by using ellipses. That was the end of the Church’s Aristotelian physics. At that point, their model of the heavens and physics in general was provably wrong.
Notice what Tycho Brahe’s system doesn’t have? Guess what was also missing from the chief attempt to defend Brahe against Galileo. Abandoning Aristotle’s physics of perfect circles would have removed most of the actual reason for thinking the heavens and the Earth followed different rules to begin with.