I disagree. The point of the post is not that these theories were on balance equally plausible during the Renaissance. It’s written so as to overemphasize the evidence for geocentrism, but that’s mostly to counterbalance standard science education.
In fact, one of my key motivations for writing it (and a point where I strongly disagree with people like Kuhn and Feyerabend) is that I think heliocentrism was more plausible during that time. It's not that Copernicus, Kepler, Descartes, and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.
Thank you! I was quite nervous about posting but am very happy with the reception, and strongly update towards LW2.0 becoming a remarkable community (in terms of how welcoming it is of truth-seeking discussion and how constructively it advances it).
Reading your comment, I'd update towards mathematical aesthetics mattering more than physical plausibility for finding true theories. I only want to believe in luck as a last resort. You seem to be making the "opposite" update. Is this correct? And, if it is, why do you update that way?
For me to update on this, it would be great to have concrete examples of what does and does not constitute "nontrivial theoretical insights" according to you and Paul.
E.g. what was the insight from the 1980s? And what part of the AG(Z) architecture did you initially consider nontrivial?
I’m looking forward to reading that post.
Yes, it seems right that gradient descent is the key crux. But apart from backprop, I'm not familiar with any efficient way of doing it that the brain might implement. Do you have any examples?
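To make the crux concrete, here is a minimal sketch (the task, network size, and learning rate are all my own illustrative choices) of the step in backprop that is usually considered hard to map onto neurons: the backward pass reuses the transposed forward weights (`W2.T` below), the so-called weight-transport problem. This is exactly the step any brain-plausible alternative would have to replace; feedback alignment, for instance, swaps it for a fixed random matrix.

```python
# Minimal two-layer backprop sketch highlighting the "weight transport" step.
# Everything here (task, sizes, learning rate) is illustrative, not canonical.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(512, 1))
Y = np.sin(X)

# Two-layer network: 1 -> 32 -> 1, tanh hidden units.
W1 = rng.normal(0.0, 0.5, (1, 32))
W2 = rng.normal(0.0, 0.5, (32, 1))

lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1)             # forward pass: hidden activations
    err = h @ W2 - Y                # output error
    dW2 = h.T @ err / len(X)        # gradient of (1/2) * MSE w.r.t. W2
    # The contested step: the error travels backwards through W2.T, i.e.
    # the exact forward synapses, transposed. Feedback alignment would
    # replace W2.T here with a fixed random matrix.
    dh = (err @ W2.T) * (1 - h**2)  # (1 - h^2) is the tanh derivative
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```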
What a great post! Very readable, concrete, and important. Is it fair to summarize it in the following way?
A market/population/portfolio of organizations solving a big problem must have two properties:
1) There must not be too much variance within the organizations.
This makes sure possible solutions are explored deeply enough. This is especially important if we expect the best solutions to seem bad.
2) There must not be too little variance among the organizations.
This makes sure possible solutions are explored widely enough. This is especially important if we expect the impact of solutions to be heavy-tailed.
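To make point 2 vivid, here's a toy simulation (the Pareto distribution, portfolio size, and noise level are my own assumptions, not from the post): when impact is heavy-tailed, a portfolio's value is dominated by its single best member, so twenty independent bets reliably beat twenty near-copies of one bet.

```python
# Toy simulation of point 2: under heavy-tailed payoffs, variance *among*
# organizations pays off. All distributions and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_orgs, n_trials = 20, 10_000

best_diverse, best_homogeneous = [], []
for _ in range(n_trials):
    # Diverse portfolio: each org independently picks its own approach.
    diverse = rng.pareto(1.5, n_orgs)
    # Homogeneous portfolio: all orgs cluster around one shared approach,
    # differing only by small noise.
    shared = rng.pareto(1.5)
    homogeneous = shared * rng.uniform(0.9, 1.1, n_orgs)
    best_diverse.append(diverse.max())
    best_homogeneous.append(homogeneous.max())

# Compare medians, since heavy tails make means noisy.
print("median best, diverse:     ", np.median(best_diverse))
print("median best, homogeneous: ", np.median(best_homogeneous))
```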
Speculating a bit, evolution seems to satisfy both properties. For moving around there are wings, fins, legs, and crawling bodies. But it's not as if dog pups are randomly born with locomotive capacities sampled from that set, or mate with species that have other capacities.
The final example you give, of top AI researchers trading models with people in the community, seems a great instance of this. People build their own deep models, but occasionally bounce them off each other just to inject the right amount of additional variance.