Fellow males, please learn to read body language
How would one go about doing this? It would be useful, but I don’t know where to start.
With library books, I think the concern is more about wear-and-tear on shared property. Some of us leakily generalize this to “folding page corners is bad”, even for non-shared books. When it’s your own book, you can do whatever you want.
Personally I find folded page corners less effective than bookmarks for quickly finding my place, especially if I’ve folded many other page corners, which makes the currently-folded one less visually obvious. But perhaps I’d learn to be better at that if I used it regularly.
For practical purposes, sure, this is a case where “absence of evidence is evidence of absence” is not a very useful refrain. The evidence is so weak that it’s a waste of time to think about it. P(I see a tiger in my trashcan|Tigers exist) is very small, and not much higher than P(I see [hallucinate] a tiger in my trashcan|Tigers don’t exist). A very small adjustment to P(Tigers exist), of which you already have very high confidence, isn’t worth keeping track of… unless maybe you’re systematically searching the world for tigers, by examining small regions one at a time, each no more likely to contain a tiger than your own trashcan. Then you really would want to keep track of that very small amount of evidence: if you round it down to no evidence at all, then even after searching the whole world, you’d still have no evidence about tigers!
It’s not fully accurate to say
Only provided you have looked, and looked in the right place.
but it might be a useful heuristic. “Be mindful of the strength of evidence, not just its presence” would be more precise, because looking in the right place does provide a much higher likelihood ratio than not looking at all.
Suppose the chance of finding a tiger somewhere in a given household, on a given day, is one in a billion. Or so say the pro-tigerians. The tiger denialist faction, of course, claims that statistic is made-up, and tigers don’t actually exist. But one household in a trillion might hallucinate a tiger, on any given day.
Today, you search your entire house—the dishwasher AND the fridge AND the trashcan etc.
P(You find a tiger|tigers exist) = .000000001
P(You find a tiger|tigers don’t exist) = .000000000001
P(You don’t find a tiger|tigers exist) = .999999999
P(You don’t find a tiger|tigers don’t exist) = .999999999999
And suppose you are 99.9% confident that tigers exist—you think you could make statements like that a thousand times in a row, and be wrong only once. (Perhaps rattling off all the animals you know.) Your prior odds ratio is 999 to 1. So you take your prior odds, (.999/.001) and multiply by the likelihood ratio, (.999999999/.999999999999), to get a posterior odds ratio of 998.999999002 to 1. This is, clearly, a VERY small adjustment.
What if you search more households: how many would you have to search, without finding a tiger, before you dropped just to 90% confidence in tigers, where you still think tigers exist but would not willingly bet your life on it? If I’ve done the math right, about five billion. There probably aren’t that many households in the world, so searching every house would be insufficient to get you down to just 90% confidence, much less 10% or whatever threshold you’d like to use for “tigers probably don’t exist”.
(And my one-in-a-billion figure is probably far too high, and so searching every household in the world should get you even less adjustment...)
But if you could search a trillion houses at those odds, and still never found a tiger—then you’d be insane to still claim that tigers probably do exist.
And if a trillion searches can produce such a shift, then each individual search can’t produce no evidence. Just very little.
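For concreteness, the arithmetic above can be sketched in a few lines of Python (the probabilities are the made-up figures from the tiger setup, not real estimates):

```python
import math

# Made-up per-household, per-day probabilities from the setup above
p_find_if_tigers = 1e-9      # chance a search turns up a tiger, given tigers exist
p_find_if_no_tigers = 1e-12  # chance of hallucinating one, given they don't

# Likelihood ratio of one *empty* search (tigers exist : tigers don't)
lr_empty = (1 - p_find_if_tigers) / (1 - p_find_if_no_tigers)

prior_odds = 999  # 99.9% confidence that tigers exist

# A single empty search barely moves the odds...
posterior_odds = prior_odds * lr_empty  # ~998.999999002

# ...but how many empty searches drop you to 90% confidence (9:1 odds)?
searches_needed = math.log(9 / prior_odds) / math.log(lr_empty)
print(posterior_odds, searches_needed)  # roughly 999 and 4.7 billion
```

Working in log-odds makes it clear why a trillion empty searches would be decisive even though each one is negligible: the tiny per-search evidence adds up linearly.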
I agree that the “looking” part is important: Looking and not finding evidence is a different kind of “absence of evidence” than just not looking.
Well, let’s take this example as given but change one little thing. Let’s say I’m not looking for tigers—instead, I heard that there are two big rocks, Phobos and Deimos, and I’m looking for evidence of their existence.
I search a house and I don’t find them. I search 5 billion houses and I don’t find them. I search a trillion houses and still don’t find them. At this point would I be insane to believe Phobos and Deimos exist?
I think it would indeed be pretty silly to maintain that a) they exist and b) each house has an independent 10^-9 chance of containing them, after searching a trillion houses and finding neither. But if you didn’t place much credence in anything like b) in the first place, your confidence in a) may not be meaningfully altered. If you already thought Phobos and Deimos were moons of Mars, then you would have extremely minimal evidence against their existence. But again, we can construct a Paradox of the Heap-type setup where you search the solar system, one household-volume at a time, and if all of them come up empty you should end up thinking Phobos and Deimos probably aren’t real, so each individual household-volume must be some degree of evidence.
My thought here—and perhaps we agree on this, in which case I’m happy to concede the point—is that the need to look in the right place is technically already covered by the relevant math, specifically by the different strengths of evidence. But for us puny humans that are doing this without explicit numerical estimates, and who aren’t well-calibrated to nine significant figures, it’s a good rule of thumb.
(This comment has been edited multiple times. My apologies for any confusion.)
Meaningfully? I thought we were counting infinitesimals :-D
As in “for most practical purposes, and with human computational abilities, this is no update at all”. I’m not sure we can usefully say this isn’t really evidence after all, or we run into Paradox of the Heap problems.
When you’re unsure about the existence of something, your idea of what exactly that something is can be fuzzy, and that affects what kind of evidence you’ll accept and where you’ll look for it.
Let me give an example where I think “absence of evidence is evidence of absence” is applicable, even though I’m not sure anyone has ever looked in the right place: Bigfoot.
Bigfoot moves around. It is possible that all of our searches happen to have missed it, like the one-volume-at-a-time search mentioned above.
We don’t really know much about Bigfoot, so it’s hard to be sure if we’ve been looking in the right place. Nor are we quite sure what we’re looking for.
And any individual hike through the woods has a very, very small chance of encountering Bigfoot, even if it does exist, so any looking that has happened by accident won’t be especially rigorous.
Nevertheless, if Bigfoot DID exist, we would expect there to be some good photographs by now. No individual instance of not finding evidence for Bigfoot is particularly significant, but all of the searches combined failing to produce any good evidence for Bigfoot makes me reasonably confident that Bigfoot doesn’t exist, and every year of continued non-findings would drive that down a little more, if I cared enough to keep track.
Similar reasoning is useful for, say, UFOs and the power of prayer. In both cases, it is plausible that none of our evidence is really “looking in the right place” (because aliens might have arbitrarily good evasion capabilities [although beware of Giant Cheesecake Fallacy], because s/he who demands a miracle shall not receive one and running a study on prayer is like demanding a miracle, etc), but the dearth of positive evidence is pretty important evidence of absence, and justifies low confidence in those claims until/unless some strong positive evidence shows up.
The second half of the sentence was the reason I was bringing it up in this context. We’ve looked, kinda, and not very systematically, and maybe not in the right places, but haven’t found any evidence. Is it fair to call this evidence against paranormal claims?
Trying to not get sidetracked into that specific sub-discussion: should you be skeptical of any given paranormal claim (specific or general), if some people have tried but nobody has been able to produce clear evidence for it? “Clear evidence” here meaning “better evidence than we would expect if the claim is false”, per the Bayesian definition of evidence.
Should you be more or less skeptical than upon first hearing the claim, but before examining the evidence about it?
I think I’m not getting why you object to “AoE is EoA”, if appending ”...but sometimes it’s so weak that we humans can’t actually make use of it” doesn’t resolve the disagreement in much the same way that ”...but only provided you have looked, and looked in the right place” does.
I am not sure what that means. Example: I claim that this coin is biased. I do a hundred coin flips, and it comes up heads 55 times. Is this “clear evidence”?
Without crunching the numbers, my best guess is no: 55 heads out of 100 is not especially unlikely for a fair coin. I would guess that no candidate value of P(heads) would gain a likelihood ratio much greater than 1 from that test.
If one of the hypotheses is that the coin is unfair in a way that causes it to always get exactly 55 heads in 100 flips, that might be clear/strong evidence, but this would require a different mechanism than usually implied when discussing coin flips.
Does it ever get strong enough for you to dismiss all claimed evidence of paranormal powers sight unseen? I don’t know—it depends on your prior and on how you update. I expect different results with different people.
I don’t know either. This is a rather different question from whether you’re getting evidence at all, though.
No need for best guesses—this is a standard problem in statistics. What it boils down to is that there is a specific distribution of the number of heads that 100 tosses of a fair coin would produce. You look at this distribution, note where 55 heads falls on it… and then? What is clear evidence? How high a probability makes things “likely” or “unlikely”? It’s up to you to decide what level of certainty is acceptable to you.
The Bayesian approach, of course, sidesteps all this and just updates the belief. The downside is that the output you get is not a simple “likely” or “unlikely”; it’s a full posterior distribution, and it’s still up to you what to make of it.
Right, it’s definitely not a hard problem to calculate directly; I specifically chose not to do so, because I don’t think you need to run the numbers here to know roughly what they’ll look like. Specifically, this test shouldn’t yield even a 2:1 likelihood ratio for any specific P(heads):fair coin, and it’s only one standard deviation from the mean. Either way, it doesn’t give us much confidence that the coin isn’t fair.
Asking what is clear evidence sounds to me like asking what is hot water; it’s a quantitative thing which we describe with qualitative words. 55 heads is not very clear; 56 would be a little clearer; 100 heads is much clearer, but still not perfectly so.
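To put a rough number on the 55-heads case discussed above (a sketch using the exact binomial likelihood, granting the biased-coin hypothesis its best possible parameter):

```python
from math import comb

def binom_likelihood(p, k=55, n=100):
    """Probability of exactly k heads in n flips when P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The most favorable bias hypothesis for this data is P(heads) = 0.55
lr = binom_likelihood(0.55) / binom_likelihood(0.5)
print(round(lr, 2))  # ~1.65
```

So even the best-case biased hypothesis beats the fair coin by less than 2:1, while 100 heads would favor a heads-only coin over a fair one by a factor of about 2^100.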
It is dangerous to be half a rationalist. This applies to groups as well as individuals. No matter how good your process for arriving at beliefs, it is indeed unethical to go around spreading those beliefs to people that will predictably misunderstand and misuse them.
In the interest of clarity: I am not at all sure how to proceed in this particular case. History makes me wary of departing from the current Schelling point of assuming everybody is equal, but that’s not my point.
I am saying that a course of action based on Bayesian reasoning has no special immunity to being ethically wrong, and it is those actual results that are worth worrying about, not merely the epistemology.
Nah, the wiki makes it much easier.
...but I don’t see how a victory for the AI party in such an experiment discredits the idea of boxed AI. It simply shows that boxes are not a 100% reliable safeguard. Do boxes foreclose on alternative safeguards that we can show to be more reliable?
The original claim under dispute, at least according to EY’s page, was that boxing an AI of unknown friendliness was, by itself, a viable approach to AI safety. Disregarding all the other ways such an AI might circumvent any “box”, the experiment purports to test the claim that something could simply talk its way out of the box—just to test that one point of failure, and with merely human intelligence.
Maybe the supposed original claim is a strawman or misrepresentation; I wasn’t involved in the original conversations, so I’m not sure. In any case, the experiment is intended to test/demonstrate that boxing alone is not sufficient, even given a perfect box which can only be opened with the Gatekeeper’s approval. Whether boxing is a useful-but-not-guaranteed safety procedure is a different question.
Do you believe Eliezer’s (or Tuxedage’s) wins were achieved by meeting the Gatekeeper’s standard for Friendliness, or some other method (e.g. psychological warfare, inducing and exploiting emotional states, etc)?
My impression has been that “boxing” is considered non-viable not just because it’s hard to tell if an AI is really truly Friendly, but because it won’t hold even an obvious UFAI that wants out.
Newton himself was a theist who attributed things falling down to God. Although he claimed “hypotheses non fingo” (“I make no hypotheses”, or possibly “I feign no hypotheses”) for why gravity actually works, he seemed unafraid of implying that it was in some way a function of the Holy Spirit. Still, I’m unaware of anyone attaching moral significance to gravity, whether before Newton or after.
Well, except Yvain, but that implication runs the other way!
LessWrong needs to deal with emotions as part of rationality.
Certainly.
Strangely, people are eager to upvote Julia Galef’s post on the importance of emotions in rationality, yet eager to downvote my attempt to deal with emotion and rationality on LessWrong.
Don’t spend >90% of your word count summarizing a novel next time.
The last paragraph was interesting, and at least some of the setup was required for this specific point, but it felt like a very low signal-to-”why am I reading all these excerpts from a seemingly-arbitrarily-selected 1992 novel” ratio.
Basically, I finished the article feeling like I had a pretty good idea what happened in the novel, but very little new insight into the combination of love and reason, or even what PhilGoetz thinks about it.
Apologies, I should clarify.
I don’t think a longish summary was inappropriate. I’m not even sure the specific amount of summary you used was inappropriate—if I were an editor, I’d have an eye out for parts which you could get away with trimming, but that’s just editing in general.
I DO think there was too little unpacking and exploration of your thesis. The 1800 words of summary aren’t the problem, it’s that 200 words of analysis is pretty sparse.
I read this months ago, but only yesterday finally got the reference.
Hello, Less Wrong! I’m Wes W., a username I’ve chosen as a compromise between anonymity and real-life usability, since I do intend/hope to get involved in meatspace once my schedule permits.
I’ve been lurking here and working my way through the Sequences for a couple months now. I’m intentionally pacing myself, so I can process things sufficiently. (Also, it’s mildly alarming to finish reading a post and find that my brain has already vented all previous opinions on the topic and replaced them with the writer’s.) I don’t really know anymore how I found this site, because I’ve been aware of its existence for a couple years, but only recently realized both the full extent of the material here, and that I wanted to be involved in it.
I’ve been an atheist for several years, following another several years of diminishing faith in my native Mormonism, but it wasn’t until I started reading Eliezer that this felt like a good thing, rather than a loss.
I currently have a job as a math tutor, which I originally got as just a college summer job, but turned into an “oh, this is what I want to do with my life” thing, so I’m now working on becoming a teacher. So clarity of thought is especially helpful to me, since I have to know something backwards and forwards in my sleep before I can do much to help a student understand. Ideas like “guessing the teacher’s password” and “how could I regenerate this knowledge, if I lost it” have been directly useful to me, and I also hope to get better at overcoming akrasia.