I’m a little confused by what you’re referring to here, so if you’re willing to spell it out I’d appreciate it, but no worries either way. Many fascinating ideas in your other comment; I’ll try to respond in a day or two.
Not intentional—thanks!
And he disclosed his name because the New York Times published it—https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html
I’ve also discussed the paper with him and he didn’t seem to have an issue with it.
Ha, I like the Einstein example! I think about the “bold leaps” thing a lot—we may be in a kind of “epistemic hell” with respect to certain ideas/theories, i.e., all small steps in that direction will seem completely false/irrational (the valley between us and the next peak is deep and wide). Maybe not a perfect fit, but I think the problem of inheritance, as you describe it in the Bakewell article, works as an example here. Heredity was much more complex than we thought, and the problem was complicated by the fact that we had lots of wrong but vaguely reasonable ideas that came from essentially mythical figures like Aristotle. The idea that we should study a very simple system and collect huge amounts of data until a pattern emerges, and then go from there instead of armchair theorizing, was kind of a crazy idea, which is why a monk was the one to do it and why no one realized how important it was until 40 years later.
The question is how we create individuals who are capable of making huge jumps in knowledge space, and environments that encourage them to do so. Anything that sounds super reasonable is probably not radical enough (which is why this is so difficult). Like you say, it can’t be too crazy, but we need people who will go incredibly far in one direction while starting from a premise that is highly speculative but not outright wrong. One example might be panpsychism—we need an Einstein who takes panpsychism as a brute fact and then attempts to reconstruct physics from there. My own wild offering is that ideas are alive, not in the trivial sense of a meme, but as complex spatiotemporal organisms, or maybe endosymbionts that are made of consciousness in the same way we are made of matter (see “Ideas are Alive and You are Dead”). Before the microscope we couldn’t really conceive of how a life form could be that small; maybe something like that is going on here as well, and new tools/theories will lead to the discovery of an entirely new domain of life. Obviously this is crazy, but maybe it is an example of the general flavor of crazy we need to explore.
…one reason is just that they are very thorough about exploring the possibility space, where a human would have long since gotten bored, said “this is stupid”, and moved on—it was stupid, but it wasn’t stupid enough, and if they had persisted long enough, it would’ve wrapped around from idiocy to genius. Our satisficing nature undermines our search for truly novel solutions; we aren’t inhumanly patient enough to find them.
One reason that people might persist in something way past boredom or reasonable justification is religious faith or some kind of irrational conviction arising from a spiritual experience. From a different angle, Tyler Cowen also offers some thoughts on why the important thinkers of the future will be religious:
Third, religious thinkers arguably have more degrees of freedom. I don’t mean to hurt anybody’s feelings here, but…how shall I put it? The claims of the religions are not so closely tied to the experimental method and the randomized control trial. (Narrator: “Neither are the secular claims!”) It would be too harsh to say “they can just make stuff up,” but…arguably there are fewer constraints. That might lead to more gross errors and fabrications in the distribution as a whole, but also more creativity in the positive direction. And right now we seem pretty hungry for some breaks in the previous debates, even if not all of those breaks will be for the better.
I don’t think Mendel was particularly inspired by his religious faith to study heredity (I might be wrong), but it certainly didn’t stop him, and in a broader sense it enabled him to be an outsider who could dedicate extended study to something seemingly trivial. As you pointed out, being an outsider is crucial if someone is to take these kinds of bold leaps. Among other things, being an insider makes it harder to get past what you described at the end of the Origins of Innovation article:
Perhaps there is some sort of psychological barrier, where the mind flinches at any suggestion bubbling up from the subconscious that conflicts with age-old tradition or with higher-status figures. Should any new ideas still manage to come up, they are suppressed; “don’t rock the boat”, don’t stand out (“the innovator has for enemies all those who have done well under the old conditions”)
This is the fundamental reasoning behind an article I wrote that was recently published in New Ideas in Psychology – “Amateur hour: Improving knowledge diversity in psychological and behavioral science by harnessing contributions from amateurs” (author access link). Amateurs can think and do research in ways that professionals can’t by virtue of not facing the incentives and constraints that come with having a career in academia. We identify six “blind spots” in academia that amateurs might focus on – long-term research, interdisciplinary research, speculative research, uncommon or taboo topics, basic observational research, and aimless projects. This led us to write:
Taken together, our discussion of blind spots highlights one overarching direction in “research-space” that may be especially promising: long, aimless, speculative, and interdisciplinary research on uncommon or taboo subjects. Out of all amateur contributions to sciences so far, Darwin’s achievements may be the primary exemplar of this type of endeavor. As aforementioned, at the time of his departure on the HMS Beagle in 1831 he was an independent scientist—a 22-year-old Cambridge graduate with no advanced publications who had to pay his own way on the voyage (Bowlby, 1990; Keynes & Darwin, 2001). Darwin’s work on evolution certainly took a long time to develop (the Beagle’s voyage took 5 years and he did not publish On the Origin of Species until 23 years after he returned). It was aimless in the sense that he did not set out from the beginning to develop a theory of evolution. His work was highly interdisciplinary (Darwin drew on numerous fields within the biological sciences in addition to geology and economics), was the culmination of a huge amount of basic observational work, and was not necessarily an experimental contribution (though he did make those as well), but primarily theoretical (and sometimes more speculative) in nature. Darwin’s theories were taboo in the sense that they went against the prevailing theological ideas of the time and caused significant controversy (and still do). We speculate that there may one day be a “Charles Darwin of the Mind” who follows a similar path. Indeed, it seems that the state of theorizing in psychology today is at an early stage comparable to evolutionary theorizing at the time of Darwin (Muthukrishna & Henrich, 2019), and the time may be ripe for an equally transformative amateur contribution in psychology. We hope that this paper provides the smallest nudge in this direction.
I actually just posted about the article here because we mention LessWrong as an example of a community where amateurs make novel research contributions in psychology – “LessWrong discussed in New Ideas in Psychology article”.
So if I had to guess – the next Darwin/Einstein/Newton will be an amateur/outsider, will be religious or for some other reason hold some weird idea that they pursue to the extreme, and will have some kind of life circumstance that allows them to do this (maybe, like Darwin, they come from money).
I also touch on this theme in my article “The Myth of the Myth of the Lone Genius”. Briefly, we have put too much cultural emphasis in science on incrementalism, on standing on the shoulders of giants. Sure, most discoveries come from armies of scientists making small contributions, but we need to also cultivate the belief that you can make a radical discovery by yourself if you try really, really hard. I also quote you at the beginning of the article:
“The Great Man theory of history may not be truly believable and Great Men not real but invented, but it may be true we need to believe the Great Man theory of history and would have to invent them if they were not real.”
Thanks for catching the grammar mistake—fixed! These are interesting extensions of the basic idea of using more randomness in science; thanks for sharing. Your last point makes me think about the use of prediction markets to guess which studies will replicate, something that people have successfully done.
Your point is well taken, and we should definitely keep in mind that randomness can also create perverse incentives and can easily be overdone. However, I would argue that there is virtually no randomness in science now, and there is ample evidence that we are bad at evaluating grants, papers, and applicants, and that we are generally overly conservative when we do evaluate (see Conservatism in Science for a review). In rare cases I might advocate for pure randomness, but, like you suggest, some kind of mixed strategy is probably the way to go in most cases. For example, with grants we can imagine a strategy where a quick review rules out obvious nonsense, the remaining proposals are sorted into high-quality and low-quality tiers, and lottery slots are allocated to those tiers accordingly (you could also just limit people to one submission to get rid of the spamming problem).
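To make the mechanism concrete, here is a minimal sketch in Python of such a two-tier lottery. The triage cutoffs, the 0–10 score scale, and the share of slots reserved for the high tier are all invented parameters for illustration, not anything from the paper:

```python
import random

def two_tier_grant_lottery(proposals, n_awards, high_share=0.7, seed=None):
    """Hypothetical two-tier lottery: screen out obvious nonsense, sort
    survivors into high/low-quality tiers, then draw winners at random,
    with more slots reserved for the high tier.

    `proposals` is a list of dicts like {"id": "G1", "score": 8}, where
    `score` is a quick triage score on a made-up 0-10 scale.
    """
    rng = random.Random(seed)

    # Quick review: rule out obvious nonsense (score below 3 here).
    viable = [p for p in proposals if p["score"] >= 3]

    # Coarse sort into two tiers; fine-grained ranking is deliberately avoided.
    high = [p for p in viable if p["score"] >= 7]
    low = [p for p in viable if p["score"] < 7]

    # Allocate slots across tiers, then draw uniformly at random within each.
    n_high = min(len(high), round(n_awards * high_share))
    n_low = min(len(low), n_awards - n_high)
    return rng.sample(high, n_high) + rng.sample(low, n_low)
```

The point of the coarse tiers is that quality still matters, but only at the resolution we can actually evaluate reliably; within a tier, the lottery does the rest.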
A few examples of us being bad at evaluating things:
“I just did a retrospective analysis of 2014 NeurIPS … There was no correlation between reviewer quality scores and paper’s eventual impact.”
“Analysing data from 4,000 social science grant proposals and 15,000 reviews, this paper illustrates how the peer-review scores assigned by different reviewers have only low levels of consistency (a correlation between reviewer scores of only 0.2).” From: Are peer-reviews of grant proposals reliable? An analysis of Economic and Social Research Council (ESRC) funding applications
For hiring decisions, it might be even worse—is this person truly a better scientist, or did they just happen to land in a more productive research lab for their PhD? Will this person make a better graduate student, or did they just go to a better undergraduate college? I would advocate for a threshold (we are fine with hiring any of these people) and then randomness in some hiring situations.
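As a sketch of what “threshold, then randomness” could look like (the names, scores, and cutoff below are all made up for illustration):

```python
import random

def threshold_then_random(candidates, threshold, n_offers, seed=None):
    """Everyone at or above the bar is treated as interchangeable;
    offers then go out by uniform random draw."""
    rng = random.Random(seed)
    acceptable = [name for name, score in candidates.items() if score >= threshold]
    return rng.sample(acceptable, min(n_offers, len(acceptable)))

# Three of these four clear the (made-up) bar; two offers are drawn at random.
pool = {"A": 8.2, "B": 7.9, "C": 6.1, "D": 8.7}
print(threshold_then_random(pool, threshold=7.0, n_offers=2, seed=42))
```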
Great post - similar to Adam Shai’s comment, this reminds me of a discussion in psychology about the over-application of the scientific paradigm. In a push to seem more legitimate/prestigious (physics envy), psychology has pushed third-person experimental science at the expense of more observational or philosophical approaches.
(When) should psychology be a science? - https://onlinelibrary.wiley.com/doi/abs/10.1111/jtsb.12316
You wouldn’t have gotten this at all from what I wrote, but we are definitely not saying that it will be easy to integrate “blind spot” research into academia or that it will happen overnight. A significant portion of the paper is spent providing examples of amateur psychology work (from the past and the present; we reference some of the work on LW), discussing why it is difficult to integrate this knowledge into modern academia, how academia might benefit from doing so, and how we might actually accomplish this over the long run. Certainly we are under no illusions that academics will suddenly wake up to all of the valuable intellectual work that happens outside the confines of academia, but maybe at the very least they will become a little more aware of the limitations of their own work and the value that can be added by engaging with these outsiders.
I just don’t really see it as that problematic if a small percentage of scientists spend their time thinking about and working on the paranormal/supernatural, because (1) scientists throughout history did this and we still made progress. Maybe it wasn’t necessary that Newton believed in alchemy/theology, but he did, and belief in these things is certainly compatible with making huge leaps in knowledge like he did. And (2) I’m not sure that believing in the possibility of ghosts is more ridiculous than the idea that space and time are the same thing and can be warped (I’m not a physicist :). UFOs would probably have been lumped into these categories as well, and now we know that there are credible reports of anomalous phenomena. Whether they are aliens or not, who knows, but it is possible that studying them could lead to an understanding of new phenomena (I think it has already led us to understand new rare forms of lightning, but I’m forgetting the specifics).
Look, I don’t really believe in these things and I don’t behave as if I did, but I am open to the possibility. The main argument here is that being open to the possibility, having a sense of mystery and epistemic humility, does make a difference in how we think and do science. This kind of goes back to the discussion of paradigm-shifting science vs. normal science. If absolutely no one believes that a paradigm shift is possible, then it will never happen. I’m of the opinion that it’s important for us to maintain a kernel of doubt in the hard-headed materialist-atheist perspective. In truth, I think we are pretty closely aligned and I am just playing devil’s advocate :)
Certainly the authoritarian link is highly speculative, but I think that in general we underestimate how much politics/culture/psychology influence what we care about and how we think in science. A more extreme version of the question is: how similar would we expect alien science to be to ours? Obviously it would be different if they were much more advanced, but assuming equal levels of progress, how would their very different minds (who knows how different) and culture lead them to think about science differently? In an extreme version, maybe they don’t even see and instead use something like echolocation—how would this influence their scientific investigation?
“Certainly, we see many examples of both theoretical and applied work in many sciences, showing that in this regard the diversity is enough.
About the unifying theory of physics, I’m not that sure about the link with authoritarian culture. But once again, in actual science, there are so many viewpoints and theories and approaches that it would take days to list them for only the smallest subfield of physics. So I’m not convinced that we are lacking diversity in this regard.”
I don’t see how you can draw this conclusion; we don’t know what the counterfactual is. Obviously there is a lot of diversity of theories/approaches, but that doesn’t mean we wouldn’t have different theories/approaches if science had been born in a different cultural background.
Again, I think these are all open questions, but I think it is reasonable to conclude that it might make a difference on the margins. Really we are asking—how contingent is scientific progress? The answer might be “not very much”, but over the long run of history it may add up.
“So little actual knowledge that almost everyone was a ‘Renaissance man’ (and so they literally all shared the same sources)”
Interesting thought—now that everyone has to specialize, there are fewer people who have different combinations of knowledge in a given discipline. Like I talked about with education, I think it’s worth thinking more about how our education systems homogenize our portfolio of minds.
Re: tenure—it’s a good point, and certainly we do have some diversity of scientific niches. It’s an open question whether we have enough or not; I think my point, more than anything, is just that this form of diversity also matters.
Radical proposal: we need scientific monasteries, isolated from the world, with celibate science monks dedicated to growing knowledge above all else :)
One point of confusion that I think is running through your comments (and this is my fault for not being clear enough) is how I am conceiving of “mind”. In my conception, a mind is the genetics plus all of the environment/past experiences, but also the current context the mind is in. So, for example, yes, you would still have the same mind in one sense whether you were doing science in a university or as an independent scientist, but in another sense no, because the thoughts you are willing and able to think would be different under very different constraints/incentives. Hope this helps.
I actually would disagree with your last point. Certainly cultural/political diversity will matter more for the psych/social sciences, but I think it will also affect what kinds of topics people care about in the first place when it comes to the harder sciences and math. I can imagine a culture with a more philosophical bent leading to more people doing theoretical work, and a culture with a greater emphasis on engineering and practicality doing more applied work. I could also imagine a more authoritarian culture leading people to do physics in a certain style—perhaps more of a search for unifying “theory of everything” type ideas, vs. a more democratic and diverse culture leading to a more pluralistic view of the universe. Not saying these would necessarily be huge effects, but on the margins it could make a difference.
Hmm, yeah, I see your point. I guess what I was saying is that there are certain thought patterns and styles of cognition that may be more likely to stumble on the kinds of ideas, or do the kind of work, that can potentially lead to paradigm shifts. Whether or not we are less able to think in this way now is definitely an open question, but I think it’s one we should worry about.
Do you have any specific examples in mind here that you are willing to share? None are coming to mind off the top of my head and I’d love to have some examples for future reference.