they managed to have almost the same GNP as France while keeping larger military spending; it’s not surprising that they won the war
of course, it may be surprising that they managed to get there. Given the model, you would expect that they sacrificed internal stability, but in fact it was France that was the most unstable country in that period! (Revolution, Napoleon, Restoration, Second Republic, Second Empire)
you could say the political instability may have really hindered France, forcing higher consumption spending, but how come this was true only post-Napoleon?
back to Prussia: it is the case that they never needed to maintain superiority over Austria and France over a long period. Nearly everyone in Germany wanted to unify; the question was how/under whom. The Prussians in 1870 needed a few quick victories to convince everyone that they were the only choice. For this reason they could focus on the short term. After that, they absorbed the rest of Germany, which had focused on its economy. Compare the parallel unification of Italy under Piedmont/Sardinia, a much weaker power that played a similar strategy.
It’s not a coincidence that Hegel came up with the Zeitgeist idea exactly in 1800s Germany...
My overall take is that this is a useful starting point, and that structural factors are often underestimated, but the model is too simplified to actually make predictions with any confidence.
On effectiveness and public health studies: the thread quoted says multiple times “in the US”. I would be curious to know whether this kind of thing is done more elsewhere, or whether it’s an implicit assumption that it could only be done in the US anyway (which could very well be true for all I know; drug profits are way higher in the US after all).
Does anybody know?
My feeling is that many of the people who did not benefit tend to “generalise from one example” and assume that’s true for most kids.
Actually, I (despite being generally pro-schooling) would say something stronger than you: there is a minority of people who are actually harmed by school compared to a reasonable counterfactual (e.g. home-schooling for some). Plus, many kids can easily see where the system is failing them, but less easily where it’s working.
Thanks for the review!
Regarding the “countering racism” doubts, I can see how the results should disprove at least some racist worldviews.
I think that an interpretation of human history common among racists is the following: the population splits into clusters, these clusters diverge into different “races”, and eventually one emerges as “the best” and out-competes or replaces all the others, before splitting again. Historically, this view was used to justify aggressive expansionism, opposition to intermarriage, and opposition to any policy that could slow this process by helping races which were seen as lesser.
I think what he wants to say is that this picture is not supported by the genetic data, which shows instead population clusters which split and merge and split again along different lines, on a fairly fast timescale and without one population replacing the other (except arguably for the Neanderthals, but even then not completely). In other words, there’s no Darwinian selection at the racial level, and there almost never has been.
According to my understanding (which comes from popularized sources; note I am not a doctor nor a biologist), antibody counts are not the main drivers of long-term immunity. Lasting immunity is given by memory T and B cells, which are able to quickly escalate the immune response in case of a new infection, including by producing new antibodies. So while a high antibody count means you’re well protected, a low count some months after the vaccine could mean that the protection has decreased, but in almost all cases you will be protected for a much longer time. Note that a low antibody count immediately after the vaccine would be different, but I don’t know if this even happens in people with a healthy immune system. Unfortunately there is no easy way to test how many memory T/B cells you have against a specific virus, without even going into how responsive they are.
So I think testing for antibodies before giving third doses would still result in giving the booster to many more people than need it. Depending on how many doses you save, and on the costs of testing vs vaccinating, it may still be worth it. But it’s probably more practical at this time to give the booster to the people we expect to have developed fewer memory cells, in other words the immunocompromised and maybe the elderly. For the others, I would simply wait for more data, and ship the extra doses to poor countries.
For info, you can find most of the exercises in python (done by someone other than Ng) here. They are still not that useful: I watched the course videos a couple of years ago and stopped doing the exercises very quickly.
I agree with you on both the praise and the complaints about the course. Besides it being very dated, I think the main problem was that Ng was neither clear nor consistent about the goal. The videos are mostly an informal introduction to a range of machine learning techniques, plus some in-depth discussion of broadly useful concepts and of common pitfalls for self-trained ML users. I found it delivered very well on that. But the exercises are mostly very simple implementations, which would maybe fit a more formal course. Using an already-implemented package to understand hands-on overfitting, regularization etc. would be much more fitting to the course (no pun intended). At the same time, Ng kept repeating things like “at the end of the course you will know more than most ML engineers”, which was a very transparent lie, but gave the impression that the course wanted to impart a working knowledge of ML, which was definitely not the case.
I don’t know how common a problem this is with MOOCs. It seems easily fixable, but the incentives might be against it happening (being unclear about the course’s goal, just like aiming for students with minimal background, can be useful in attracting more people). Like johnswentworth I had more luck with open course ware, with the caveat that sometimes very good courses build on other ones which are not available or have insufficient online material.
On this I agree with you. But the Darwin issue is a bit of a special case: the topic was politically/religiously charged, so it was important that a very respected figure was spearheading the idea. Wallace himself understood this, I think, which is why he sent his research to Darwin instead of publishing it directly. But this is mostly independent of Darwin’s scientific genius (only mostly, because he gained that status with his previous work on less controversial topics).
On the whole, I agree with jbash and Gerald below: “geniuses” in the sense of very smart scientists surely exist, and all else equal they speed up scientific advancement. But they are not that far above ordinary smart-ish people. Lack of geniuses is rarely the main bottleneck, so a hypothetical science with fewer geniuses but more productive averagely-smart researchers would probably advance faster, if less glamorously.
You could make a parallel between geniuses in science and heroes in war: heroic soldiers are good to have, but in the end wars are won by the side with more resources and better strategies. This does not stop warring nations from making a big deal of heroic exploits, but that’s done mostly to improve morale.
What you say is even more true than you think. We would have had “relativity” in 1906, if you are satisfied with an experimentally indistinguishable theory which kept the ether as a conventional choice (a degree of difference similar to the one between interpretations of quantum mechanics). Poincaré had already submitted a paper in 1905 before seeing Einstein’s, building on Lorentz’s previous work. Now, Einstein’s theory is preferable for several reasons, but ultimately the difference is small.
If you look, you find similar stories for Newton, Mendeleev, obviously Darwin, and others. There are some counterexamples, but ultimately we should take Newton seriously: the height of the shoulders you stand on matters more than your own height for determining how far you can see.
Maybe “zero-sum” was not the right expression, because what I mean is broader than strictly zero-sum games. I meant winner-takes-most situations, where the reward of the best performer is outsized with respect to the reward of the next-best. This does not necessarily mean that the game is strictly zero-sum: in many cases, it is just that the product you deliver is scalable, so everyone will just want the best product (of course, differing preferences may mean that the ranking is not the same for everyone).
I am also convinced that all the things you mentioned have a fat tail, even if they don’t strictly follow a Pareto distribution (probably books/records will be the closest to Pareto, and salaries the closest to a Gaussian, but with a fat tail on the right). But I think this does not reflect the distribution of quality/skill but rather the characteristics of the markets.
Example: book sales. I like fantasy books, but the number of books I read per year is capped. So there are a few authors I follow, plus maybe once a year I look for reviews and check if some good book by other authors has come out. If a certain book I would have read is not released, chances are I would read the next-best one, and find that in fact it is not much worse. Of course, books of much better/worse quality would convince me to read more/less, but in practice the quality delivered by different authors is close enough that this is a relatively small effect. If everyone had the same taste in books, and everyone read 10 books per year, we would all be reading the same 10. If an outstanding new book came out, book number 10 would go from one billion sales to zero. Of course, this is way oversimplified: we have different tastes, and the interaction of objective quality with subjective tastes, plus other factors, creates a Pareto-like distribution of sales.
Example 2: tech companies. In most western countries, Google has a market share which is 10x Bing’s. It’s not that Google is 10x better than Bing: if people used Bing, they would maybe waste 10% extra time getting to the result they want. But that difference is fairly consistent across different people, so Google is like a runner which is 10% faster and wins 90% of races. This is not true for all companies, but most of the largest ones rely on mechanisms which create winner-takes-most situations (IP, brand recognition, network effects, economies of scale). That’s why you have a fat tail in wealth created by entrepreneurs (IMHO).
To go back to research: scientific breakthroughs are not a limited resource, it’s true. But given the area of expertise of a researcher and the state of the art in the field, the most promising research topics are limited, and there are many researchers going into those topics. The first to find even a partial solution will easily get published on a fast track. The others will get published, but much extra work will be required: comparing with previous results, fighting referees who favor other approaches, showing extra rigor in the analysis… All this will lower their apparent productivity. Or, if you are not confident, you can take a less promising topic: you have less risk, but your expected productivity goes down anyway. To this, add that better researchers get access to better complements: more funding, more and better collaborators, maybe fewer teaching responsibilities if you are in academia. All this widens the productivity gap between the best and the not-much-worse. Funding is particularly perverse because it’s partially awarded on past results without dividing by the money spent to obtain them, so good/lucky researchers enter a cycle of more results → more funding → even more results → even more funding…
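The funding feedback loop can be sketched with a toy simulation (a hypothetical model with made-up parameters, not real data): give researchers tightly packed Gaussian abilities, but let each year’s funding track the previous year’s results. Small ability and luck differences then compound multiplicatively, and cumulative output comes out heavily right-skewed even though ability is symmetric.

```python
import random
import statistics

random.seed(1)

N_RESEARCHERS = 10_000
YEARS = 20

# Near-Gaussian abilities: everyone within a few percent of each other.
abilities = [random.gauss(1.0, 0.05) for _ in range(N_RESEARCHERS)]

outputs = []
for ability in abilities:
    funding = 1.0
    total_results = 0.0
    for _ in range(YEARS):
        # This year's results scale with ability, current funding, and luck.
        results = ability * funding * random.lognormvariate(0.0, 0.3)
        total_results += results
        # Next year's funding tracks this year's results: the feedback loop
        # "more results -> more funding -> even more results".
        funding = results
    outputs.append(total_results)

# Ability is roughly symmetric (mean close to median), but cumulative
# output is heavily right-skewed (mean far above median): a fat tail
# appears without any fat tail in the underlying ability distribution.
ability_skew = statistics.mean(abilities) / statistics.median(abilities)
output_skew = statistics.mean(outputs) / statistics.median(outputs)
```

With these made-up numbers the ability mean/median ratio stays around 1, while the output ratio is well above it; the exact figures depend on the parameters, but the qualitative point (multiplicative feedback manufactures fat tails) is robust.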
In general, I think fat tails in outcomes are present everywhere because they come out naturally from the interaction of incentive structures (e.g. markets, IP, funding), economies of scale, and network effects. But they don’t need to reflect an underlying distribution of abilities. I obviously cannot prove that they never do, but my standard assumption is that they don’t. (You could say that I have a prior that ability is Gaussian-distributed, given that as far as I know all human characteristics that are directly measurable on an absolute scale look more Gaussian-like than Pareto-like.)
I think there is a crucial difference between performance, as defined in the paper, and ability, which should very much be taken into account. I will not debate whether their definition of performance is consistent with common usage, but they failed to state their definitions clearly, and I think you misunderstood their results because of this.
The paper measures performance as the results of (roughly) zero-sum competitions. This is very clear when they analyze athletes (number of wins), politicians (election wins, re-elections) and actors (awards). But this is also true for research, as writing an impactful paper means arriving at a novel result before competing teams, or succeeding at explaining something where others have failed.
But, for a professional runner, winning 90% of races is not the same as being 90% faster. Indeed, a runner who is on average 5% faster will win most races (not all, as he will have off days where his speed goes down by more than 5%).
Tests such as PISA, and grades, try to measure ability, e.g. your math skill. That is analogous to a runner’s speed, not to how many races he wins. I believe this is very much Gaussian-distributed, and the paper does not show anything to the contrary. Indeed, it is very reasonable to believe that Gaussian-distributed abilities result in Pareto-distributed outcomes in competitive situations (it may be a provable result, but I’m too lazy to do the math now). So it’s pretty much appropriate to give grades on a Gaussian.
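A quick sanity check of that claim (a toy simulation with made-up numbers, not the paper’s data): give runners Gaussian-distributed baseline speeds, add Gaussian day-to-day noise, and count race wins. The speeds stay Gaussian and tightly packed, but the win counts concentrate heavily on the fastest few.

```python
import random
from collections import Counter

random.seed(0)

N_RUNNERS = 20
N_RACES = 1000

# Gaussian-distributed baseline speeds: the field is tightly packed,
# with the best runner only a few percent faster than the average.
baselines = [random.gauss(10.0, 0.2) for _ in range(N_RUNNERS)]

wins = Counter()
for _ in range(N_RACES):
    # On race day each runner fluctuates around their baseline speed.
    day_speeds = [b + random.gauss(0.0, 0.2) for b in baselines]
    wins[day_speeds.index(max(day_speeds))] += 1

# A small Gaussian edge in speed turns into a very skewed win count:
# a handful of runners collect the bulk of the wins, while under a
# uniform outcome the top 3 would take only 3/20 = 15% of races.
top3_share = sum(n for _, n in wins.most_common(3)) / N_RACES
```

The exact share depends on the noise-to-spread ratio, but the qualitative shape (Gaussian input, winner-takes-most output) holds across reasonable parameter choices.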
Now, we could debate whether productivity comes mostly from exceptional performers in the real world, which might result in similar reform ideas. BTW, that’s something I mostly don’t believe, but it’s a tenable position on a very complicated issue.
No I missed it, that’s great! I was only aware of phase I. It should be revised way up then.
No, but all neighbors are, except Kosovo (and Bosnia, which is on track for NATO accession). A new Serbia-Kosovo war (or Serbia vs. someone else) is in principle possible and, as you say, would not imply NATO breakdown. But the US and EU currently have a strong grip on the region, the last war sent the message that they were willing to maintain it with force, and I think they have and will continue to have a strong interest in no new war developing. And no country in the area should be suicidal enough to go against them. So I think the probability of open war there is very low, unless EU or NATO breakdown has already happened or is happening at the same time.
It is certainly possible but what kind of scenario are you thinking about?
For conflicts to move west of Ukraine, they would have to involve EU or NATO countries, almost certainly both. So that would mean either an open Russia-NATO war or the total breakdown of both NATO and the EU. Both scenarios would have huge consequences for the world as a whole, nearly as much as a war between China and the US and its allies.
I think he might be referring to the Simon–Ehrlich wager. And indeed there have been other similar claims in the past, more often proven wrong than correct.
You are right of course, and I am going by other people’s analyses, so I am not sure how correct they are this time around. I do not think we will have hugely rising commodity prices making green energy unfeasible, unless there is a war (or just a trade war) blocking the supply of a key input.
Nevertheless, the extrapolation of decreasing costs for solar and wind based on current trends will eventually hit some “hard” limit, and metals are a likely candidate. After all, as manufacturing costs for panels fall, the fraction of cost coming from raw materials grows even at constant prices. And to get prices to go down 10x, we need to supply several times more energy than now (maybe 5x?), meaning growing wind and solar by two orders of magnitude in 20 years. This could plausibly put strain on the supply of raw materials.
Of course, if the bottleneck turns out to be energy distribution and storage, then we could get prices going down 10x at the source (which is what Daniel is interested in) but not for household consumption, and only a modest increase in demand.
Thanks for the clarifications! I realized that maybe you are mostly interested in the tech sector in the US and AI-related development, which also explains why you didn’t think of biomedical research immediately. Is this impression correct? If so, you might want to edit the question further to restrict the range of answers.
I fixed the link; I hadn’t noticed that the ) had been taken as part of the address.
BTW, I read your post on military tech in the meantime, it was interesting.
I think a 10x decrease in energy prices is too much. My reasons are:
There are some constraints on solar/wind which are currently not binding, but will be by the time we have converted most energy production to green energy. The main ones are metals (see e.g. https://www.coppolacomment.com/2021/03/from-carbon-to-metals-renewable-energy.html) and land use (especially in India, China, Europe, Japan and a few other Asian countries where population density is high, but that’s most of the world population anyway). This of course does not consider the possibility of major technological breakthroughs in organic solar and energy conversion/transport, which may happen but are not guaranteed, so I think they are out of the scope of your exercise.
As the cost of energy falls, we will consume more, especially in poor countries; plus, you mentioned increased consumption by supercomputers and AI. This will partially balance the falling cost of production, so my (uneducated) guess would be that a 2x-3x decrease in prices is a more reasonable expectation. An analysis by an expert could convince me otherwise.
Like rayom, I also noticed you did not mention anything about biology and medicine. I think there will be some advances from that side. A malaria vaccine seems probable by 2040 (maybe ~80%?) and would be a big thing for large parts of the world. Some improvements in cancer therapy also seem to have relatively high probability (nothing even remotely like “curing all cancer”, to be clear). We might get some improvement for Alzheimer’s, dementia or other age-related illnesses, but my “business as usual” expectation is that only moderate advancements will be widely deployed by 2040. Nevertheless, they might be sufficient to significantly improve the quality of life of elderly people in rich countries.
Interesting reading, although I wonder if there are alternative or complementary explanations: instead of direct cultural transmission, one could think of different economic paths due to different starting levels of industrialization, infrastructure, education etc., which then generate different cultural clusters. Culture will also influence the economy, of course, in a sort of co-evolution.
Btw, if you wanted to apply this to Italy (another Italian here!), I think you should look not at coalitions but at single parties within them. The Austro-Hungarian regions correspond pretty well to the Lega Nord heartlands, for example.
But this also allows me to give an example of the alternative explanation. The historic Lega Nord vote corresponds even better with the areas where the economy is dominated by small but dynamic family-run manufacturing (north Italy, east of Milan). You can think of the Lega, pre-2010, as representing conservative voters coming from this economic and cultural background. The reasons why that part of the country ended up with this specific economic model have, as far as I know, little to do with culture and more to do with the timing and conditions of industrialization, which in turn depended, at least in part, on the infrastructure, institutions and levels of human capital left over from the pre-Italy period.
Edit: On second thought, you could invoke cultural history and inertia to explain the differences between eastern Lombardy and Veneto on one side, and Emilia-Romagna and Tuscany on the other. Both areas followed similar economic paths as far as I know, but they belonged to different states pre-unification, and they are culturally different in a way that is very clear in the polls.
Consider also that when a zero-sum game is embedded in a positive-sum one, often the most effective way to negotiate is to threaten to walk away from the positive-sum game if you don’t get a bigger share of the spoils (e.g. threatening to leave your job if you don’t get a raise). The simplified version is the ultimatum game: https://en.wikipedia.org/wiki/Ultimatum_game.
This also means that holding a positive-sum trade sacred has the side effect of freezing the zero-sum part of it to the status quo.
I think this happens to many scientists. I found myself in a similar situation once: we could not have done better at the time, but we could have noticed that the tools we used were not sufficient. Fortunately, by the time we noticed we had better tools, and we found that the conclusions were still valid, even if some quantitative results were pretty inaccurate. Like you, I wanted to submit an erratum, but my boss insisted on including the results in another related paper instead. I still feel that an erratum would have been better, but I think he was worried that the referee would be someone who disliked him for unrelated reasons.
If I understand correctly, your paper was about a new method and it turns out that the method itself is fatally flawed. However, what you report is what comes out of the approach, and there is a problem with the basic idea which was not trivial to see. The thing you feel guilty about is omitting the outlier from the results, but that you cannot fix anyway. Is this more or less it?
I don’t think you should retract. I understand the impulse if you feel there is no value left in the paper, but it seems to me that retraction is mostly done for misconduct, for papers containing something factually wrong (you wrote you did X but actually you did Y), or for results that come out of not adopting established best practices.
An erratum would be good (because it’s linked to the paper), but in your case it may be difficult to write. On one hand, the whole paper is invalidated. On the other hand, there is no factual error to correct, and a lot of papers are based on ideas which looked good at the time but are found to be wrong by later literature; most of them, I would say :) Scientists are not systematically publishing errata when they recognize their proposed method was not as good as hoped.
In the end, I think your boss’s suggestion of making another paper may be the best in your case. You would be discussing why method X, which seems like a good idea, does not work under better analysis: the subject of hundreds of papers every year. The fact that you are the ones who proposed method X is no problem. If someone gets the idea of using that approach, given that it’s 15 years old, they will check the more recent literature, I hope. Nowadays it’s easy to check the citing papers with Google Scholar.
I understand if you feel like you are hiding the fact that you had those suspicious results from the beginning, but you didn’t figure out they were important until later. Also, the important thing is to correct the mistake in some form. If your boss finds the correction low priority, discuss with the colleague who did some work for the new paper, and try to find the time to present your boss with a draft.