I agree. In fact, you could say that Mélenchon and Le Pen are closer to each other on economic and possibly foreign policy, and both very far from Macron, so it's not unreasonable that some votes would transfer from one to the other. Huge differences on everything else, of course (immigration, but also law and order, education, culture, …).
I disagree on Hollande and the center-left in general. Hollande had to juggle a very broad coalition, as you say. He ended up hated by everyone because his way of handling it was not finding a middle ground, but campaigning as Mélenchon lite and then attempting to govern as Macron lite. Then he tried to shift the blame for the turnaround onto financial markets and the EU. After this, any possibility of a center-left coalition with an actual center-left agenda was dead and buried...
I think if you look up antifragile investment you'll find a lot of discussion of exactly this problem. As far as I understand, the idea is that most investments have limited downsides (at most, you lose what you put in) but may have limitless upsides in low-probability scenarios. So you can make many small investments of this kind, so that when one pays off, it more than covers the losses on the rest. Taking your example of the nuclear bunker: if you could build one with 1% of your wealth or less, in this frame of mind you probably should. Or, less dramatically, invest a bit in any technology that could reach world-changing level even if it looks unlikely, plus buy a house as well as a cabin far away from any city. Learn a bit of various skills that could turn out useless or incredibly useful, depending on the future.
The answers you are suggesting are more related to safe/robust investment (I'm uncertain of the correct term), i.e. investment which should be useful whatever happens, but has a capped upside. Both maintaining good health and buying a house count in this category. I'm no more qualified to give specific advice here than anyone else, but if you ask yourself “what would a standard prudent person do” you're basically there.
Robust investment is what most people do in practice, and that's probably a good thing, because I think antifragile investment is too easy to screw up. But under some assumptions about the nature of the high variance, which I think match the kind of future scenarios you listed, well-thought-out antifragile strategies could be much better. At the very least, you can copy the idea of not putting all your eggs in one basket, even if you don't go all the way to making many small gambles in the expectation that at least one will pay off.
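To make the contrast concrete, here's a toy simulation of the two payoff structures. All the numbers (`stake`, `p_win`, `payoff`, the 3% safe return) are invented for illustration, not calibrated to any real asset class:

```python
import random

random.seed(0)

def antifragile_portfolio(n_bets=100, stake=0.01, p_win=0.02, payoff=200):
    """Spend all wealth on many small bets: each stake is usually lost
    (capped downside), but rarely pays off hugely (fat upside)."""
    wealth = 0.0
    for _ in range(n_bets):
        if random.random() < p_win:
            wealth += stake * payoff  # one win covers many lost stakes
    return wealth

def robust_portfolio(total=1.0, safe_return=1.03):
    """One safe investment: useful whatever happens, capped upside."""
    return total * safe_return

# Averaged over many runs, the antifragile portfolio comes out well
# ahead, but any single run can easily end at zero.
runs = [antifragile_portfolio() for _ in range(10_000)]
print(sum(runs) / len(runs))  # around 4.0 (= 100 * 0.01 * 0.02 * 200)
print(robust_portfolio())     # 1.03
```

With these made-up numbers the expected value of the antifragile strategy is much higher, yet roughly one run in eight ends with nothing at all, which is the "easy to screw up" part.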
Interesting post! I like the picture you draw. But you should consider the possibility that it was not a Rome-unique factor, but the intersection of multiple things, each of which was true for several ancient states, but all of which held only for Rome. In particular, I have the impression that the subjects of the Persian empire were pretty happy with it and flourished under its rule. To be clear, it was nothing like citizenship, because Persia was a kingdom and not a republican city-state. But between the investment model and the pillaging model that you mention, it was closer to the first. At some point, the Jews thought Cyrus was the Messiah! And in some ways it's not a surprise that the second-greatest empire of western antiquity had some things in common with the first.
I like to think of it this way: the determinant is the product of the eigenvalues of a matrix, which you can conveniently compute without reducing the matrix to diagonal form. All the interesting properties of the determinant are very easy (and often trivial!) to show for the product of the eigenvalues.
More in the spirit of your post, I don't remember how hard it is to show that the determinant is invariant under a unitary change of basis, but not too hard, I think. It's not the only such invariant, of course (the trace is one as well; I don't remember if there are others). But you could definitely start from the product-of-eigenvalues idea and make it invariant to get the formula for det.
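As a quick numerical sanity check of both claims (using numpy and a random matrix purely for illustration): det equals the product of the eigenvalues, and it is unchanged by conjugation with an orthogonal (real unitary) matrix, since the eigenvalues themselves don't change:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))

# det(A) equals the product of the eigenvalues...
det = np.linalg.det(A)
eig_prod = np.prod(np.linalg.eigvals(A))
# (complex eigenvalues of a real matrix come in conjugate pairs,
# so their product is real up to rounding)
assert np.isclose(det, eig_prod.real)

# ...and is invariant under the change of basis A -> Q A Q^-1.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # Q is orthogonal, Q^-1 = Q.T
assert np.isclose(np.linalg.det(Q @ A @ Q.T), det)
print("both checks pass")
```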
Interesting read, but I don't think the initial example and the following ones are very much connected. The shift of opinion about WW2 has presumably happened without fabricated evidence or misinformation about factual events. The USSR and the USA played very different roles in the defeat of Germany, so asking “which contributed the most” is sensitive to shifting narratives and the highlighting of different events. Similar questions from the more distant past: who was to blame for WW1? Was Napoleon spreading modernity and equality in Europe, or ruthlessly subjugating neighbors? Were the Middle Ages a dark age? Was the western Roman empire brought down by barbarians, or mainly by other factors? For all of these you can get different answers without fabricated evidence, just by pushing some facts forward and neglecting others. That's not to say that having tamper-proof historical sources is not important, just that it's not sufficient. And personally I think most manipulation happens at the broader narrative level (in the past and in the present).
Or more generally, X sends a costly signal of his belief in P. If X is the state (as in example 2) a bet is probably impractical, but doing anything that would be costly if P is false should work.
But for this, it makes a big difference in what sense Y does not trust X. If Y thinks X may deceive, costly signals are good. If Y thinks X is stupid or irrational or similar, showing belief in P is useless.
I mostly agree with the other commenters that the story does not show the qualitative changes we may expect to see from autonomous weapons. But I found it a very good short story nevertheless, and believable as well.
I think it could serve well if widely circulated, by getting someone to think about the topic for the first time before moving on to scenarios farther from what they are used to.
I notice that while a lot of the answer is formal and well-grounded, “stories have the minimum level of internal complexity to explain the complex phenomena we experience” is itself a story :)
Personally, I would say that any gears-level model will have gaps in the understanding, and trying to fill these gaps will require extra modeling which also has gaps, and so on forever. My guess is that part of our brain constantly tries to find the answers and fill the holes, like a small child asking “why x? …and why y?”. So when a more practical part of us wants to stop investigating, it plugs the holes with fuzzy stories which sound like understanding.
Obviously, this is also a story, so discount it accordingly...
I agree it would be very good, and possibly an economic no-brainer. My point is just that what is discussed in the post works for a political no-brainer, by which I mean something that no one would bother to oppose. To get what you want you need a real political campaign, or a large scale economic education campaign. Even then it’s difficult, imo, unless your proposals fit one of the cases I mention above.
That said, if you are thinking of the US there is an easy proposal to be made for medicine: making medical school equivalent to a college degree and eliminating the requirement of completing college before entering (see https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/, which notes it's done that way in Europe; I'd add it's the same for law school etc.). It's not an earth-shaking reform, but it could work exactly for that reason.
The problem is, licensed people have made an investment and expect to repay it by reaping profits from the protected market. Some have borrowed money to get in and may have to file for personal bankruptcy. So they will oppose the reform by any means at their disposal, for which I don’t blame them (even if it is obviously against the general interest).
Such a reform would be doable in the following cases: (1) it compensates the losers in some way; (2) it's so gradual that currently licensed people will mostly retire before it's fully implemented; (3) it is decided by a political faction that has no interest in the votes of the licensed and no sympathy for their concerns, while the licensed have no “hard power” to block the reform (and this third case will never be fulfilled for a blanket effort on all licenses: in practice you get a party punching down on the least powerful people in the opponent's coalition).
As you see, it’s a whole other order of complication with respect to the case presented in the post...
they managed to have almost the same GNP as France while keeping higher military spending, so it's not surprising that they won the war
of course, it may be surprising that they managed to get there. Given the model, you would expect that they sacrificed internal stability, but in fact it was France that was the most unstable country in that period! (Revolution, Napoleon, restoration, Second Republic, Second Empire)
you could say that political instability may have really hindered France, forcing higher consumption spending, but how come this was true only post-Napoleon?
back to Prussia: it is the case that they never needed to maintain superiority over Austria and France over a long period. Nearly everyone in Germany wanted to unify; the question was how, and under whom. The Prussians in 1870 needed a few quick victories to convince everyone that they were the only choice. For this reason they could focus on the short term. After that, they absorbed the rest of Germany, which had focused on its economy. Compare the parallel unification of Italy under Piedmont/Sardinia, a much weaker power that played a similar strategy.
It’s not a coincidence that Hegel came up with the Zeitgeist idea exactly in 1800s Germany...
My overall take is that this is a useful starting point, and that structural factors are often underestimated, but the model is too simplified to actually make predictions with any confidence.
On effectiveness and public health studies: the thread quoted says multiple times “in the US”. I would be curious to know whether this kind of thing is done more elsewhere, or whether there's an implicit assumption that it could only be done in the US anyway (which could very well be true for all I know; drug profits are way higher in the US, after all).
Does anybody know?
My feeling is that many of the people who did not benefit tend to “generalise from one example” and assume that's true for most kids.
Actually, I (despite being generally pro-schooling) would say something stronger than you: there is a minority of people who are actually harmed by school compared to a reasonable counterfactual (e.g. home-schooling, for some). Plus, many kids can easily see where the system is failing them, less easily where it's working.
Thanks for the review!
Regarding the “countering racism” doubts, I can see how the results should disprove at least some racist worldviews.
I think one interpretation of human history among racists is the following: the population splits into clusters, these clusters diverge into different “races”, eventually one emerges as “the best” and out-competes or replaces all the others, before splitting again. Historically, this view was used to justify aggressive expansionism, opposition to intermarriage, and opposition to any policy that could slow this process by helping races which were seen as lesser.
I think what he wants to say is that this picture is not supported by the genetic data, which instead shows population clusters which split and merge and split again along different lines, on a fairly fast timescale and without one population replacing the other (except arguably for the Neanderthals, and even then not completely). In other words, there is no Darwinian selection at the racial level, and there almost never has been.
According to my understanding (which comes from popularized sources; I am not a doctor nor a biologist), antibody counts are not the main drivers of long-term immunity. Lasting immunity is given by memory T and B cells, which are able to quickly escalate the immune response in case of a new infection, including producing new antibodies. So while a high antibody count means you're well protected, a low count some months after the vaccine could mean that the protection has declined, but in almost all cases you will be protected for a much longer time. Note that a low antibody count immediately after the vaccine would be different, but I don't know if this even happens in people with a healthy immune system. Unfortunately there is no easy way to test how many memory T/B cells you have against a specific virus, without even going into how responsive they are.
So I think testing for antibodies before giving third doses would still result in giving the booster to many more people than need it. Depending on how many doses you save, and on the cost of testing vs. vaccinating, it may still be worth it. But it's probably more practical at this time to give the booster to the people we expect to have developed fewer memory cells, in other words the immunocompromised and maybe the elderly. For the others, I would simply wait for more data, and ship the extra doses to poor countries.
For info, you can find most of the exercises in Python (done by someone other than Ng) here. They are still not that useful: I watched the course videos a couple of years ago and I stopped doing the exercises very quickly.
I agree with you on both the praise and the complaints about the course. Besides it being very dated, I think the main problem was that Ng was neither clear nor consistent about the goal. The videos are mostly an informal introduction to a range of machine learning techniques, plus some in-depth discussion of broadly useful concepts and of common pitfalls for self-trained ML users. I found it delivered very well on that. But the exercises are mostly very simple implementations, which would maybe fit a more formal course. Using an already-implemented package to understand overfitting, regularization etc. hands-on would have fit the course much better (no pun intended). At the same time, Ng kept repeating things like “at the end of the course you will know more than most ML engineers”, which was a very transparent lie, but gave the impression that the course wanted to impart a working knowledge of ML, which was definitely not the case.
I don’t know how common a problem this is with MOOCs. It seems easily fixable, but the incentives might be against it happening (being unclear about the course's goal, just like aiming for students with minimal background, can be useful in attracting more people). Like johnswentworth I had more luck with open courseware, with the caveat that sometimes very good courses build on other ones which are not available or have insufficient online material.
On this I agree with you. But the Darwin issue is a bit of a special case: the topic was politically/religiously charged, so it was important that a very respected figure was spearheading the idea. Wallace himself understood this, I think; he sent his research to Darwin instead of publishing it directly. But this is mostly independent of Darwin's scientific genius (only mostly, because he gained that status with his previous work on less controversial topics).
On the whole, I agree with jbash and Gerald below: “geniuses” in the sense of very smart scientists surely exist, and all else equal they speed up scientific advancement. But they are not that far above ordinary smart-ish people. Lack of geniuses is rarely the main bottleneck, so a hypothetical science with fewer geniuses but more productive average-smarts researchers would probably advance faster, if less glamorously.
You could draw a parallel between geniuses in science and heroes in war: heroic soldiers are good to have, but in the end wars are won by the side with more resources and better strategies. This does not stop warring nations from making a big deal of heroic exploits, but that's done mostly to improve morale.
What you say is even more true than you think. We would have had “relativity” in 1906, if you are satisfied with an experimentally indistinguishable theory which kept the ether as a conventional choice (a degree of difference similar to the one between interpretations of quantum mechanics). Poincaré had already submitted a paper in 1905 before seeing Einstein’s, building on Lorentz’s previous work. Now, Einstein’s theory is preferable for several reasons, but ultimately the difference is small.
If you look you find similar stories for Newton, Mendeleev, obviously Darwin, and others. There are some counterexamples, but ultimately we should take Newton seriously: the height of the shoulders you stand on is more important than your own for determining how far you can see.
Maybe zero-sum was not the right expression, because I think it's broader than strictly zero-sum games. I meant winner-takes-most situations, where the reward of the best performer is outsized with respect to the reward of the next best. This does not necessarily mean that the game is strictly zero-sum. In many cases, it's just that the product you deliver is scalable, so everyone will just want the best product (of course, differing preferences mean the ranking is not the same for everyone).
I am also convinced that all the things you mentioned have a fat tail, even if they don't strictly follow a Pareto distribution (probably books/records are the closest to Pareto, salaries the closest to a Gaussian, but with a fat tail on the right). But I think this reflects not the distribution of quality/skill but the characteristics of the markets.
Example: book sales. I like fantasy books, but the number of books I read per year is capped. So there are a few authors I follow, plus maybe once a year I look for reviews and check if some good book by other authors has come out. If a certain book I would have read is not released, chances are I'll read the next best one, and find that in fact it is not much worse. Of course, books of much better/worse quality would convince me to read more/less, but in practice the quality delivered by different authors is close enough that this is a relatively small effect. If everyone had the same taste in books, and everyone read 10 books per year, we would all be reading the same 10. If an outstanding book came out, book number 10 would go from one billion sales to zero. Of course, this is way oversimplified: we have different tastes, and the interaction of objective quality with subjective tastes, plus other factors, creates a Pareto-like distribution of sales.
Example 2: tech companies. In most western countries, Google has a market share which is 10x Bing's. It's not that Google is 10x better than Bing. If people used Bing, they would maybe waste 10% extra time getting to the result they want. But that's fairly consistent across different people. So Google is like a runner who is 10% faster and wins 90% of races. This is not true for all companies, but most of the largest ones rely on mechanisms which create winner-takes-most situations (IP, brand recognition, network effects, economies of scale). That's why you have a fat tail in wealth created by entrepreneurs (IMHO).
To go back to research. Scientific breakthroughs are not a limited resource, it's true. But given the area of expertise of a researcher and the state of the art in the field, the most promising research topics are limited. And there are many researchers going after those topics. The first to find even a partial solution will easily get published on a fast track. The others will get published, but much extra work will be required: comparing with previous results, fighting referees who favor other approaches, showing extra rigor in the analysis… All this will lower their apparent productivity. Or, if you are not confident, you can take a less promising topic: you have less risk, but your expected productivity goes down anyway. To this, add that better researchers get access to better complements: more funding, more and better collaborators, maybe fewer teaching responsibilities if you are in academia. All this widens the productivity gap between the best and the only-slightly-worse. Funding is particularly perverse because it's partially awarded on past results without dividing by the money spent to obtain them, so good/lucky researchers enter a cycle of more results → more funding → even more results → even more funding…
In general, I think fat tails in outcomes are present everywhere because they arise naturally from the interaction of incentive structures (e.g. markets, IP, funding), economies of scale, and network effects. But they don't need to reflect an underlying distribution of abilities. I obviously cannot prove that they never do, but my standard assumption is that they don't. (You could say that I have a prior that ability is distributed in a Gaussian way, given that as far as I know all human characteristics that are directly measurable on an absolute scale look more Gaussian-like than Pareto-like.)
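A toy version of the book-market example shows the mechanism (all parameters invented for illustration): authors get Gaussian-distributed quality, each reader perceives it with a small personal taste shock and buys only the single book that looks best to them, and sales come out heavily concentrated even though the underlying quality differences are modest:

```python
import random

random.seed(1)
N_AUTHORS, N_READERS = 200, 2_000

# Underlying quality is Gaussian: authors are genuinely close together.
quality = [random.gauss(0.0, 1.0) for _ in range(N_AUTHORS)]

sales = [0] * N_AUTHORS
for _ in range(N_READERS):
    # Each reader sees quality plus an idiosyncratic taste shock,
    # then buys only the book that looks best to them.
    perceived = [q + random.gauss(0.0, 0.5) for q in quality]
    sales[max(range(N_AUTHORS), key=perceived.__getitem__)] += 1

sales.sort(reverse=True)
top_share = sum(sales[:10]) / N_READERS
print(f"top 10 of {N_AUTHORS} authors take {top_share:.0%} of all sales")
```

Shrink the taste shock toward zero and everyone buys the same book; make it huge and sales flatten out. The fat tail comes from the winner-takes-most choice rule, not from a fat tail in quality.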
I think there is a crucial difference between performance, as defined in the paper, and ability, which should very much be taken into account. I will not debate whether their definition of performance is consistent with common usage, but they failed to state their definitions clearly, and I think you misunderstood their results because of this.
The paper measures performance as the result of (roughly) zero-sum competitions. This is very clear when they analyze athletes (number of wins), politicians (election wins, re-elections) and actors (awards). But it's also true for research, as writing an impactful paper means arriving at a novel result before competing teams, or succeeding at explaining something where others have failed.
But, for a professional runner, winning 90% of races is not the same as being 90% faster. Indeed, a runner who is on average 5% faster will win most races (not all, as he will have off days where his speed goes down by more than 5%).
Tests such as PISA, and grades, try to measure ability, e.g. your math skill. That is analogous to a runner's speed, not to how many races he wins. I believe this is very much Gaussian distributed, and the paper does not show anything to the contrary. Indeed, it is very reasonable to believe that Gaussian-distributed abilities result in Pareto-distributed outcomes in competitive situations (it may be a provable result, but I'm too lazy to do the math now). So it's pretty much appropriate to give grades on a Gaussian.
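A quick sanity check of the runner claim, with invented numbers (a 5% mean-speed edge, 3% day-to-day variation in form): the speed difference between two such runners is Gaussian with standard deviation σ√2, so the win probability is Φ(Δ/(σ√2)), and a simulation agrees:

```python
import math
import random

random.seed(0)

MEAN_A, MEAN_B, DAY_SD = 1.00, 1.05, 0.03  # B is 5% faster on average
N_RACES = 100_000

# Simulate races where each runner's daily speed fluctuates.
wins_b = sum(
    random.gauss(MEAN_B, DAY_SD) > random.gauss(MEAN_A, DAY_SD)
    for _ in range(N_RACES)
)

# Closed form: P(B wins) = Phi(z) with z = delta / (DAY_SD * sqrt(2)).
z = (MEAN_B - MEAN_A) / (DAY_SD * math.sqrt(2))
p_win = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

print(wins_b / N_RACES, round(p_win, 3))  # both ~0.88
```

So a modest, consistently Gaussian edge in speed already yields a lopsided share of wins, before any ranking-based rewards get layered on top.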
Now, we could debate if productivity comes mostly from exceptional performers in the real world, which might result in similar reform ideas. BTW, that’s something I mostly don’t believe but it’s a tenable position on a very complicated issue.