Why do you write “Flaws in mainstream science” if you mean only specific parts of science?
Some other mainstream areas have replication rates of more than 95%.
Interesting article, thanks.
I agree with the general concept. I would be a bit more careful in the conclusions, however:
No visible correlation does not mean no causation; it is just a strong hint. In this specific example, the hint comes from a single parameter: the lack of a significant correlation between internet use and overweight when both exercise categories are added, together with the significant correlation of internet usage with the other two parameters.
With the proposed diagram, I get:
p(Internet)=.141
p(not Internet)=.859
p(Overweight)=.209
p(not Overweight)=.791
p(Ex|Int & Ov)=.10
p(Ex|Int & no Ov)=.62
p(Ex|no Int & Ov)=.27
p(Ex|no Int & no Ov)=.85
This model has 6 free parameters; the insignificant correlation between overweight and internet is the only constraint. It is true that other models have to be more complex to explain the data, but we know that our world is not a small toy simulation. There are causal connections everywhere; the question is just “are they negligible or not?”.
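As a sanity check, here is a minimal sketch of the proposed diagram in Python, using the numbers above. Treating internet and overweight as independent root causes of exercise is my reading of the diagram, not something established in the thread:

```python
from itertools import product

# Parameters taken from the list above.
p_int = 0.141  # p(Internet)
p_ov = 0.209   # p(Overweight)
p_ex = {       # p(Exercise | Internet, Overweight)
    (True, True): 0.10,
    (True, False): 0.62,
    (False, True): 0.27,
    (False, False): 0.85,
}

# Joint distribution over (Internet, Overweight, Exercise), with
# Internet and Overweight as independent root nodes.
joint = {}
for i, o, e in product([True, False], repeat=3):
    p = (p_int if i else 1 - p_int) * (p_ov if o else 1 - p_ov)
    p *= p_ex[(i, o)] if e else 1 - p_ex[(i, o)]
    joint[(i, o, e)] = p

# Marginal checks: Overweight is uncorrelated with Internet by
# construction, while Exercise depends on both.
p_ov_given_int = sum(p for (i, o, e), p in joint.items() if i and o) / p_int
p_ex_given_int = sum(p for (i, o, e), p in joint.items() if i and e) / p_int
p_ex_total = sum(p for (i, o, e), p in joint.items() if e)
print(p_ov_given_int)              # 0.209 = p(Overweight), no correlation
print(p_ex_given_int, p_ex_total)  # ~0.51 vs. ~0.70, a clear dependence
```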
I can still make 100000 lottery predictions and get a good score; I am looking for a system which you cannot trick in that way. Ok, for each prediction, you can subtract the average score of all predictors from your own score. That should work: assuming that all the other predictions are rational too, you get an expected difference of 0 for the lottery predictions.
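A sketch of that idea with Brier scores; the helper names and numbers are mine, not an established scoring system:

```python
# Relative scoring: subtract the average score of all forecasters
# from your own score on each question (lower Brier score is better).
def brier(p, outcome):
    """Brier score of a probability p for a binary outcome (0 or 1)."""
    return (p - outcome) ** 2

def relative_score(my_p, others_ps, outcome):
    avg = sum(brier(p, outcome) for p in others_ps) / len(others_ps)
    return brier(my_p, outcome) - avg  # negative = better than the field

# Lottery-style prediction: every rational forecaster says 10%, so the
# difference is exactly 0, whatever the outcome.
print(relative_score(0.10, [0.10, 0.10, 0.10], outcome=0))  # 0.0
print(relative_score(0.10, [0.10, 0.10, 0.10], outcome=1))  # 0.0
```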
> I’ve only been making the forecast well-specified
I think “impact here (10% confidence), no impact at that place (90% confidence)” is quite specific. It is a binary event.
To calculate the Brier score, you used *your* assumption that meteorites have a 1-in-a-million chance of hitting a specific area. What about events without a natural way to get those assumptions?
Let’s use another example:
Assume that I predict, with 95% confidence, that neither Obama nor Romney will be elected. If that prediction comes true, it is amazing and indicates high predictive power (especially if I make multiple similar predictions and most of them come true).
Assume that I predict, with 95% confidence, that either Obama or Romney will be elected. If that prediction comes true, it is not surprising.
Where is the difference? The second event is expected by others. How can we quantify “difference from the expectations of others” and include it in the score? Maybe with an additional weight: weight each prediction by how far it departs from the expectations of others (as the mean of the log ratio, or something like that).
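One possible version of that weight, as a sketch; the log-odds ratio is just one candidate, and the consensus probabilities are assumed for illustration:

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

# Weight a prediction by its distance from the consensus in log-odds.
def surprise_weight(my_p, consensus_p):
    return abs(log_odds(my_p) - log_odds(consensus_p))

# “Neither Obama nor Romney” at 95% against a ~1% consensus gets a large
# weight; “either Obama or Romney” at 95% against ~99% gets a small one.
print(surprise_weight(0.95, 0.01))  # ~7.5: the bold prediction counts a lot
print(surprise_weight(0.95, 0.99))  # ~1.7: the safe prediction counts little
```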
Interesting, thanks, but not exactly what I was looking for. As an example, take a simplified lottery: 1 number is drawn out of 10. I can predict “number X will have a probability of 10%” 100 times in a row; this is correct, and it will receive a good score under all scoring rules. However, those predictions are not interesting.
If I make 100 predictions of the form “a meteorite will hit position X tomorrow (10% confidence)” and 10% of them are correct, those predictions are very interesting: you would expect that I have some additional knowledge (for example, that I observed an approaching asteroid).
The difference between the examples is the quality of the predictions: everybody can make correct (unbiased) 10% predictions for the lottery, but getting enough evidence to assign correct 10% probabilities to asteroid impacts is hard; most predictions for those positions would be far lower.
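To put a number on that difference, compare each forecast with the base rate an ignorant predictor would use; the 1-in-a-million meteorite base rate is the assumed figure from earlier in the thread:

```python
import math

# Information gained over the base rate, in bits, for an event that
# actually occurred: log2 of the likelihood ratio.
def info_gain_bits(my_p, base_p):
    return math.log2(my_p / base_p)

print(info_gain_bits(0.10, 0.10))  # lottery: 0 bits, anyone can do this
print(info_gain_bits(0.10, 1e-6))  # meteorite: ~16.6 bits per correct hit
```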
brazil84 stated that there are just two options, so let’s stick to that example first.
“[rifle] no bullet will be found in or around the person’s body 0.01% of the time” is evidence against the rifle (and for the handgun). But “[handgun] no bullet will be found in or around the person’s body 0.001% of the time” is even stronger evidence against the handgun (and for the rifle). In total, we have some evidence for the rifle.
Now let’s add a 0.001% probability that it was not a gunshot wound; in this case, the probability of finding no bullet is (close to) 100%. The rifle gets an initial probability of 60% and the handgun 40% (plus rounding error).
So let’s update (priors in percent, multiplied by the likelihood of finding no bullet):
No gunshot: 0.001 → 0.001
Rifle: 60 → 0.006
Handgun: 40 → 0.0004
Of course, the probability that one of those 3 happened has to be 1 (counting all guns as “handgun” or “rifle”), so let’s convert that back to probabilities: 0.001 + 0.006 + 0.0004 = 0.0074
No gunshot: 0.001/0.0074 = 13.5%
Rifle: 0.006/0.0074 = 81.1%
Handgun: 0.0004/0.0074 = 5.4%
Taken together, the rifle and handgun numbers increased the probability of a rifle shot, as the probability of “no gunshot” was very small. All numbers are our estimates, of course.
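The same update as a short sketch, with the priors in percent and the likelihoods as fractions, exactly as above:

```python
# Bayesian update: multiply each prior by the likelihood of the evidence
# (“no bullet found”), then normalize so the three hypotheses sum to 1.
priors = {"no gunshot": 0.001, "rifle": 60.0, "handgun": 40.0}  # percent
likelihoods = {"no gunshot": 1.0, "rifle": 0.0001, "handgun": 0.00001}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())  # 0.001 + 0.006 + 0.0004 = 0.0074
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # no gunshot ~13.5%, rifle ~81.1%, handgun ~5.4%
```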
If either X or Y has to be true, you cannot have 20% for X and 35% for Y. The remaining 45% would be a contradiction (neither X nor Y, but “X or Y”). While you can work with those numbers (20 and 35), they are not probabilities any more; they are relative probabilities. Normalizing them gives 20/55 ≈ 36% for X and 35/55 ≈ 64% for Y.
It is very unlikely that the murderer won the lottery. However, if a suspect did win the lottery, this does not reduce the probability that he is guilty; he has the same (low) probability as everyone else.
That reminds me of a question about judging predictions: is there any established method to say “x made n predictions, was underconfident / properly calibrated / overconfident, and the quality of the predictions was z”? Assume the predictions are given as “x will happen (y% confidence)”.
It is easy to make 1000 unbiased predictions about lottery drawings, but this does not mean you are good at making predictions.
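For the calibration half, a rough sketch; the bucketing scheme and the toy data are mine:

```python
from collections import defaultdict

# Bucket predictions by stated confidence and compare each bucket's
# stated confidence with the observed frequency of the event.
def calibration(preds):  # preds: list of (confidence, happened) pairs
    buckets = defaultdict(list)
    for conf, happened in preds:
        buckets[round(conf, 1)].append(happened)
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

# Toy data: well calibrated at 90%, overconfident at 60%.
preds = [(0.9, True)] * 9 + [(0.9, False)] \
      + [(0.6, True)] * 3 + [(0.6, False)] * 7
print(calibration(preds))  # {0.6: 0.3, 0.9: 0.9}
```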
Sorry, I was a bit imprecise. “You need texts without a size limit” would be correct. The issue is: your memory (and probably your lifetime) is finite, even if you convert the whole observable universe into extended memory.
To get an infinite set of texts from a finite set of characters, you need texts of unbounded length. I think it is similar for dreams: the set of possible experiences is finite, and a dream is a finite sequence of experiences.
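The counting argument behind this, as a tiny sketch; the alphabet size and the length cap are arbitrary:

```python
# With a finite alphabet and a hard length cap, the number of distinct
# texts is finite, however large the cap is.
alphabet_size = 26
max_length = 3
total = sum(alphabet_size ** n for n in range(1, max_length + 1))
print(total)  # 26 + 676 + 17576 = 18278
```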
The pool of possible dreams is so large that we will never hit any limit, and even if we did (which would require experienced lifetimes of 10^whatever years), we would have forgotten the earlier dreams long ago.
Daydreaming? I think we should not take “dream” too literally here.
“Infinite” is problematic, indeed. I think there is just a finite number of dreams of finite length.
I did not call anything “true moral dilemma”.
Most dilemmas are situations where similar-looking moral guidelines lead to different decisions, or where common moral rules are inconsistent or not well-defined. In those cases, it is hard to decide whether the moral system prefers one action over the other, or does not care.
The tricky task is to distinguish between those 3 cases, and to find general rules which can do this uniquely in every situation while representing your concept of morality at the same time.
If you can do this, publish it.
I think this would be even more interesting as “pick at random, without an external source of randomness”. Sure, you can get random numbers from random.org, your computer, or the seconds on your watch (a nice idea), but those just blur the effect of mind-generated random numbers.
I agree with that interpretation. The 13636 murders contain:
* 1676 from strangers
* 5974 with some relation
* 5986 unknown
Based on the known cases only, I get 22% strangers. More than expected, but it might depend on the region, too (US vs. Europe). Based on that table, we can do even better: we can exclude reasons which are known to be unrelated to the specific case, and persons/relations which are known to be innocent (or non-existent). A bit tricky, as the table gives the relation of murderer to victim and not the other direction, but it should be possible.
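The 22% figure, recomputed from the table above:

```python
strangers, related, unknown = 1676, 5974, 5986
known = strangers + related   # 7650 murders with a known relation
print(strangers / known)      # ~0.219, i.e. about 22% strangers
```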
Interesting posts.
However, I disagree with your prior by a significant amount. The probability that [person in group] commits a murder within one year is small, but so is the probability that [person in group] is in contact with a victim. I would begin with the event [murder has happened], assign a high probability (~90%) to “the murderer knew the victim”, and then distribute those 90% among the people who knew her (and work with ratios afterwards). I am not familiar enough with the case to do that now, but Amanda would probably get something around 10%, before any evidence or (missing) motive is taken into account.
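A sketch of that prior with assumed numbers; the count of people who knew the victim is invented for illustration:

```python
p_murderer_knew_victim = 0.9   # the ~90% estimate from above
n_people_who_knew_her = 9      # assumed count, not from the actual case
prior_per_acquaintance = p_murderer_knew_victim / n_people_who_knew_her
print(prior_per_acquaintance)  # 0.1: “something around 10%” as a start
```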
That would just cause them to pump chemicals into your head, I think. But it is definitely thinking in the right direction.
As long as I am not aware of that (or do not dislike it)… well, why not. However, MugaSofer is right, the genie has to understand the (future) utility function for that. But if it can alter the future without restrictions, it can change the utility function itself (maybe even to an unbounded one… :D)
Immortal humans can go horribly wrong, unless “number of dying humans” is really what you want to minimize.
“Increase my utility as much as you can”?
Well, it is a necessary step to find other fruits.
If something applies to white horses only, I would write “white horses” instead of “horses”. Otherwise it might suggest (at least to some readers) that it applies to many, most or even all horses. It is not wrong, but it can be misleading.