“Everyone on this site obviously has an interest in being, on a personal level, more rational.”
Not in my experience. In fact, I was downvoted and harshly criticized for expressing confusion at gwern posting on this site and yet having no apparent interest in being rational.
“Instead of generalizing situation-specific behavior to personality (i.e. “Oh, he’s not trying to make me feel stupid, that’s just how he talks”), people assume that personality-specific behavior is situational (i.e. “he’s talking like that just to confuse me”).”
Those aren’t really mutually exclusive. “Talking like that just to confuse his listeners is just how he talks”. It could be an attribution not of any specific malice, but generalized snootiness.
This may seem pedantic, but given that this post is on the importance of precision:
“Some likely died.”
“Likely, some died”.
Also, I think you should more clearly distinguish between the two means, such as saying “sample average” rather than “your average”. Or use x̄ and μ.
The whole concept of confidence intervals is rather problematic: on the one hand, it’s one of the most common statistical measures presented to the public, but on the other, it’s one of the most difficult concepts to understand.
What makes the concept of a CI so hard to explain is that pretty much every time the public is presented with one, they are shown one particular confidence interval along with the figure “95%”; but the 95% is not a property of that particular interval, it’s a property of the process that generated it. The public understands a “95% confidence interval” to be an interval that has a 95% chance of containing the true mean, but actually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating an interval that contains the true mean.
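A quick simulation makes the distinction concrete (a minimal sketch with made-up parameters: a normal population with known σ, so a simple z-interval applies). Roughly 95% of the intervals the process generates cover the true mean, even though any single interval either contains it or doesn’t:

```python
import random
import statistics

# Illustrative parameters (all made up): normal population with known sigma.
random.seed(0)
true_mean, sigma, n, z = 50.0, 10.0, 30, 1.96

trials = 10_000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    half_width = z * sigma / n ** 0.5   # known-sigma z-interval
    if xbar - half_width <= true_mean <= xbar + half_width:
        hits += 1

print(hits / trials)   # close to 0.95 across many generated intervals
```

The 0.95 shows up only as a long-run property of the generating process, which is exactly the point the public-facing presentation obscures.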
By how many orders of magnitude? Would you play Russian Roulette for $10/day? It seemed to me that implicit in your argument was that even if someone disagrees with you about the expected value, an order of magnitude or so wouldn’t invalidate it. There’s a rather narrow set of circumstances where your argument doesn’t apply to your own situation. Simply asserting that you will sign up soon is far from sufficient. And note that many conditions necessitate further conditions; for instance, if you claim that your current utility/dollar ratio is ten times what it will be in a year, then you’d better not have turned down any loans with APY less than 900%.
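To spell out the loan arithmetic (a toy sketch using the hypothetical 10× utility/dollar ratio from above; the numbers are illustrative, not anyone’s actual finances):

```python
# If a marginal dollar today is worth 10x what it will be worth in a year,
# then borrowing at any annual rate below 900% is a net gain in utility.
utility_ratio = 10           # (utils per dollar now) / (utils per dollar in a year)
apy = 8.99                   # 899% annual interest, just under the threshold

utility_gained_now = utility_ratio * 1.0    # borrow $1 today
utility_repaid_later = 1.0 * (1 + apy)      # repay $(1 + apy) in a year

assert utility_gained_now > utility_repaid_later   # the loan is worth taking
```

So anyone claiming such a ratio, yet declining ordinary loans, is contradicting their own stated valuation.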
And how does the value of cryonics go up as your mortality rate does? Are you planning on enrolling in a program with a fixed monthly fee?
“Also there are important risks that we are in simulation, but that it is created not by our possible ancestors”
Do you mean “descendants”?
What about after the program, if you don’t get a job, or don’t get a job in the data science field?
1% of a bad bet is still a bad bet.
They should have some statistics, even if they’re not completely conclusive.
As I understand it, the costs are:
$1400 for lodging (commuting would cost even more)
$2500 deposit (not clear on the refund policy)
10% of next year’s income (with deposit going towards this)
I wouldn’t characterize that as “very little”. It’s enough to warrant asking a lot of questions.
How would you characterize the help you got getting a job? Getting an interview? Knowing what to say in an interview? Having verifiable skills?
Are your finances so dire that if someone offered you $1/day in exchange for playing Russian Roulette, you would accept? If not, aren’t you being just as irrational as you are accusing those who fail to accept your argument of being?
You might want to consider what the objective is, and whether you should have different resources for different objectives. Someone who’s in a deeply religious community who would be ostracized if people found out they’re an atheist would need different resources than someone in a more secular environment who simply wants to find other atheists to socialize with.
I think I should also mention that you posted a URL without making it clickable. You should put anchors in your site: for instance, there should at the very least be anchors at “New atheists”, “Theists”, and ““Old” Atheists”, with links to those anchors where you first list the three categories, if not an outline at the beginning with links to each part. Organizationally, it’s a bit of a mess; for instance, the “Communities of Atheists.” heading isn’t set off from the rest of the text at all.
“Just a “Survival Guide for Atheists” ”
Are you referring to the one by Hemant Mehta?
I suppose this might be a better place to ask than trying to resurrect a previous thread:
What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in the data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don’t necessarily expect to get those exact answers, but it would be good to have some data, presented in a manner that is at least partially resistant to cherry picking, massaging, etc.) Basically, what sort of evidence E does Signal have to offer, such that I should update towards its being effective, given that both E, and “E has been selected by Signal, and Signal has an interest in choosing E to be as flattering, rather than as informative, as possible”, are true?
Also, the last I heard, there was a deposit requirement. What’s the refund policy on that?
“We’re planning another one in Berkeley from May 2nd – July 24th.”
Is that June 24th?
Isn’t that fraud? That is, if you work for a company that matches donations, and I ask to give you money for you to give to MIRI, aren’t I asking you to defraud your company?
It does mean that not-scams should find ways to signal that they aren’t scams, and the fact that something does not signal not-scam is itself strong evidence of scam.
Isn’t the whole concept of matching donations a bit irrational to begin with? If a company thinks that MIRI is a good cause, they should give money to MIRI. If they think that potential employees will be motivated by them giving money to MIRI, wouldn’t a naive application of economics predict that employees would value a salary increase of a particular amount at a utility that is equal or greater than the utility of that particular amount being donated to MIRI? An employee can convert a $1000 salary increase to a $1000 MIRI donation, but not the reverse. Either the company is being irrational, or it is expecting its employees to be irrational.
Shouldn’t we first determine whether the amount of effort needed to figure out the costs of the tests is less than the expected value of ((cost of doing tests − expected gain) | (cost of doing tests > expected gain))?
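To make that comparison concrete (a toy sketch with entirely made-up numbers, using the identity E[(cost − gain) | cost > gain] · P(cost > gain) = E[max(cost − gain, 0)]):

```python
import random

# Made-up model: test cost is uncertain, uniform on [0, 100]; the expected
# gain from running the test is a known 40. The conditional expectation
# times the probability of overrun equals the expected avoidable loss.
random.seed(1)
effort_to_estimate = 5.0   # effort needed to figure out the test's actual cost
gain = 40.0                # expected gain from running the test

costs = [random.uniform(0, 100) for _ in range(100_000)]
expected_avoidable_loss = sum(max(c - gain, 0) for c in costs) / len(costs)

# Worth estimating the cost first iff the effort is less than the loss it can avert.
worth_estimating = effort_to_estimate < expected_avoidable_loss
```

Here the expected avoidable loss comes out near 18 (the analytic value for these made-up parameters), so the 5 units of estimation effort would be worth spending.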
And if this is presented as some sort of “competition” to see whether LW is less susceptible than the general populace, then anyone who has fallen for it may be further discouraged from reporting it. A lot of this exploits the banking system’s lack of transparency as to just how “final” a transaction is; for instance, if you deposit a check, your account may be credited even if the check hasn’t actually cleared. So scammers take advantage of the fact that most people are not familiar with all the intricacies of banking, and think that once their account has been credited, it’s safe to send money back.
It is somewhat confusing, but remember that surjectivity is defined with respect to a particular codomain: a function is surjective if its range is equal to its codomain, and thus whether it’s surjective depends on what its codomain is considered to be; every function maps its domain onto its range. “f maps X onto Y” means “f is surjective with respect to Y”. So, for instance, the exponential function maps the real numbers onto the positive real numbers; it’s surjective with respect to the positive real numbers. Saying “the exponential function maps real numbers onto real numbers” would not be correct, because it’s not surjective with respect to the entire set of real numbers. So saying that a one-to-one function maps distinct elements onto a set of distinct elements can be considered correct, albeit not as clear as saying “to” rather than “onto”. It also suffers from a lack of clarity in that it’s not clear what the “always” is supposed to range over: are there functions that sometimes map distinct elements to distinct elements, but sometimes don’t?
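For finite sets, the dependence of surjectivity on the chosen codomain can be checked directly (an illustrative sketch; `is_surjective` is a made-up helper, and the squaring function stands in for the exponential since its range is easy to enumerate):

```python
# Surjectivity is a relation between a function and a chosen codomain:
# the same function can be surjective onto one set and not onto a superset.
def is_surjective(f, domain, codomain):
    return {f(x) for x in domain} == set(codomain)

square = lambda x: x * x
domain = [-2, -1, 0, 1, 2]

assert is_surjective(square, domain, [0, 1, 4])            # onto its range
assert not is_surjective(square, domain, [0, 1, 2, 3, 4])  # not onto a larger codomain
```

The function itself never changed; only the codomain we asked about did.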
So, we have
We don’t have both “Either K or A” and “Either Q or A”
Therefore, we either have “Neither K nor A” or “Neither Q nor A”
Since both of the possibilities involve “no A”, there can be no A.
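The three-line derivation can also be checked by brute force (a minimal sketch; `k`, `q`, `a` are booleans recording whether the King, Queen, and Ace are in the hand):

```python
from itertools import product

# Enumerate all 8 possible hands over {King, Queen, Ace} and keep those
# consistent with the premise that the two statements are not both true.
consistent = []
for k, q, a in product([False, True], repeat=3):
    s1 = k or a   # "Either K or A"
    s2 = q or a   # "Either Q or A"
    if not (s1 and s2):
        consistent.append((k, q, a))

# Every consistent hand lacks the Ace.
assert all(not a for (_, _, a) in consistent)
```

Any hand containing the Ace makes both statements true, so it is excluded by the premise, which is the whole argument in one pass.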
Your post seems to be a rather verbose way of showing something that can be shown in three lines. I guess you’re trying to illustrate some larger framework, but it’s rather unclear what it is or how it adds anything to the analysis, and you haven’t given the reader much reason to look into it further.
The reason someone might think an Ace would be a good choice is that they misread the puzzle as saying “one of these two statements is true”. But it is nowhere stated that either statement is true; rather, it is stated that at least one statement is false. Once one notices that the Ace appears in both of these statements, one of which has to be false, one’s intuition should lead one to choose the King.
Also, if you’re using set notation, (K ∪ A) indicates the same thing as “A or K or both (K ∩ A)”, i.e. the inclusive or.