Something is missing in this explanation. Why isn’t everyone super rich?
It makes a huge difference whether the reviewer is some anonymous person unrelated to the journal or the editor-in-chief of the journal itself. I don’t think it’s appropriate to call the latter peer review (there are no “peers” involved), but that’s not important.
The editor-in-chief has a strong motivation to keep the journal high quality: if he rejects a good article, it’s his loss. An anonymous peer, on the contrary, has a stronger motivation to use the review as an opportunity to promote (get citations for) his own research than to help the journal curate the best science.
Let me try to rephrase the shift I see in science. Over the 20th century, science became bureaucratised; the process of “doing science” was largely formalised and standardised. Researchers obsess over impact factors, p-values, h-indices, anonymous peer review, grants, and so on.
There are actual rules in place that formally determine whether you are a “good” scientist. That wasn’t the case for most of the history of science.
Also, the “full-time” scientist who never did any job other than academic research was much less common in the past. Take Einstein, who wrote his 1905 papers while employed at the patent office, as an example.
The Royal Society of 1660 and current academia are very different beasts. For example, the current citations-and-journals game is a pretty new phenomenon. Peer review wasn’t really a thing 100 years ago, and neither were complex grant applications.
Academia in its current form isn’t Lindy. It’s not like we’ve been doing this for thousands of years; the current system of academia is at most 70 years old.
Does it have to be so? Today, being a scientist means spending a considerable portion of your time doing bullshit instead of actual research. Wouldn’t you be in a much better position to do quality research if you earned a good salary, saved a big portion of it, and did science as a hobby?
Some important things can be a source of income, such as farming. Farming is pretty important, and there are no huge issues with farmers doing it for profit.
Problems happen when there is a huge disconnect between value and reward. This happens a lot in basic research, because researchers don’t have any direct customers.
Arguably, in basic research you can’t have customers even in principle. Your customers are future researchers who will build on top of your work. They will be able to decide whether it was valuable or whether it was crap, but by that time you’ll be pretty old, or dead.
Very nice. A few notes:
1. Wrong incentives are no excuse for bad behaviour; people should rather quit their jobs than engage in it.
2. The world isn’t black or white; sometimes there is a gray zone where you contribute enough to be net-positive while cutting some corners to get your contribution accepted.
3. People tend to overestimate their contribution and underestimate the impact of their behaviour, so 2. is quite dangerous.
4. In an environment with sufficiently strong wrong incentives, the only possible outcome is that only those with weak morals survive. Natural selection.
5. There is a lot of truth in Taleb’s position that research should not be a source of income but rather a hobby.
Yeah, I think you’re right. There are two types of explanations:
those which compress information
those which provide us with faster algorithms to reason about the world
The three-body problem is an example of the latter, as is a lot of math and computer science.
A good property of a scientific theory is that it serves as data compression: the fewer bits you need to explain the world around you, the better the theory. This is, IMO, a very good definition of what an explanation is.
Also, the compression is usually lossy, as in the case of Newtonian mechanics.
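As a toy illustration of the compression view (a sketch of my own, not anyone’s actual argument here): a short “theory” that regenerates a dataset can be far smaller than even a general-purpose compression of the data itself.

```python
import zlib

# Raw "observations": the first 1000 perfect squares, listed explicitly.
raw = ",".join(str(n * n) for n in range(1000)).encode()

# "Theory": a short description that regenerates exactly the same data.
theory = b"','.join(str(n*n) for n in range(1000))"

# Even compressing the raw listing with zlib leaves it much larger
# than the theory that explains it.
print(len(raw), len(zlib.compress(raw)), len(theory))
```

Newtonian mechanics plays the same role for planetary positions: a few equations plus small residuals instead of endless tables, and the residuals are where the lossiness hides.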
True, but that’s usually a very artificial context. Often, when someone claims they know the probabilities accurately enough, they are mistaken or lying.
There is one other explanation for the results of those experiments.
In the real world, it’s quite uncommon for somebody to tell you the exact probabilities; you need to infer them from the situation around you. And we people pretty much suck at assigning numeric values to probabilities. When I say 99%, it probably means something like 90%; when I say 90%, I’d guess 70% corresponds to that.
But that doesn’t mean that people behave irrationally. If you view the proposed scenarios through this lens, they look more like:
a) Certainty of a million, or a ~60% chance of getting 5 million.
b) A slightly higher probability of getting a million, but the difference is much smaller than the actual error in the estimates of the probabilities themselves.
With this in mind, the actual behaviour of people makes much more sense.
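Scenario b) is easy to check with a quick simulation. This is only a sketch with a made-up noise model (the 0.1 noise level and the specific 11% vs 10% pair are illustrative assumptions, not calibration data): blur each stated probability with estimation noise and see how often the nominally better option still looks better.

```python
import random

random.seed(0)

def perceived(p, noise=0.1):
    """Stated probability blurred by Gaussian estimation noise,
    clamped to [0, 1]. The noise level is an illustrative assumption."""
    return min(1.0, max(0.0, p + random.gauss(0, noise)))

# Allais-style pair: an 11% chance of a million vs a 10% chance.
# The nominal 1-point edge is far smaller than the estimation error.
trials = 100_000
a_wins = sum(perceived(0.11) > perceived(0.10) for _ in range(trials))
print(a_wins / trials)  # close to 0.5: the gap drowns in the noise
```

Under this model the two options are effectively indistinguishable, so picking either one is hard to call irrational.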
And what about this argument:
As civilisation progresses, it becomes increasingly cheaper to destroy the world, to the point where any lunatic can do so. It might be that the laws of physics make it much harder to protect against destruction than to actually destroy; this actually seems to be the case with nuclear weapons.
Certainly, there is currently at least one person in a million in this world who would choose to destroy it all if they could.
It might be that we reach this level of knowledge before we manage to travel between solar systems.
Very simple. To prove it for an arbitrary number of values, you just need to prove that h_i being true increases its expected assigned probability after measurement, for each i.
If you define T as h_i and F as NOT h_i, you have reduced the problem to the two-value version.
There is actually a much easier and more intuitive proof.
For simplicity, let’s assume H takes only two values: T (true) and F (false).
Now, let’s assume that God knows that H = T, but the observer (me) doesn’t. If I now measure some dependent variable D and observe the value d_i, I’ll either:
1. Update my probability of T upwards if d_i is more probable under T than in general.
2. Update my probability of T downwards if d_i is less probable under T than in general.
3. Not change my probability of T at all if d_i is exactly as probable under T as in general.
(“In general” here means without the knowledge of whether T or F happened, i.e. assuming the observer’s prior probabilities.)
The law of conservation of expected evidence tells us that in general (assuming the prior probabilities), the expected change in the assigned probability of T is 0. However, if H = T, then those observations that update the probability of T upwards are more likely under T than in general, and those which update it downwards are less likely. Thus the expected change in the assigned probability of T is positive if T is true.
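The argument can be checked numerically. Here is a sketch with made-up numbers (a uniform prior and likelihoods P(D=1|T)=0.8, P(D=1|F)=0.3 are my assumptions, purely for illustration): the expected update is zero under the prior, but positive once we condition on H = T.

```python
import random

random.seed(1)

prior_T = 0.5
p_d1 = {True: 0.8, False: 0.3}   # assumed likelihoods P(D=1 | H)

def posterior_T(d):
    """P(T | D=d) by Bayes' rule."""
    like_T = p_d1[True] if d else 1 - p_d1[True]
    like_F = p_d1[False] if d else 1 - p_d1[False]
    return like_T * prior_T / (like_T * prior_T + like_F * (1 - prior_T))

def mean_update(h_fixed=None, n=200_000):
    """Average change posterior - prior, with H either fixed by "God"
    or drawn from the observer's prior."""
    total = 0.0
    for _ in range(n):
        h = (random.random() < prior_T) if h_fixed is None else h_fixed
        d = random.random() < p_d1[h]
        total += posterior_T(d) - prior_T
    return total / n

print(round(mean_update(), 3))              # ~0: conservation of expected evidence
print(round(mean_update(h_fixed=True), 3))  # ~0.126: positive update given H = T
```

The first number is the law itself; the second is the claim above: conditioned on T being true, the upward updates happen more often than the prior suggests, so the expectation turns positive.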