Seeking to exchange ideas and learn about what matters.
Alexander
Explanations as Hard to Vary Assertions
Thank you for pointing this out, by the way. This is an important nuance. I just read this: Simple refutation of the ‘Bayesian’ philosophy of science.
By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories, and that (2) the credences held by a rational thinker obey the probability calculus. However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all. Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.
Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.
Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.
What science really seeks to ‘maximise’ (or rather, create) is explanatory power.
And I am now really confused and conflicted. I would love it if someone could enlighten me on how Deutsch’s definition of explanation (hard-to-vary assertions about reality) and Bayesian probability conflict with each other. I am missing something very subtle here.
For context, I am aware of Popper and falsification, but wouldn’t a theory eventually become practically falsified within Bayesian updating if there is enough evidence against it?
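To make concrete what I mean by "practically falsified", here is a toy sketch of repeated Bayesian updating; the prior and likelihoods are invented purely for illustration and nothing hinges on the particular numbers:

```python
# Toy sketch of "practical falsification" via Bayesian updating.
# All numbers are invented purely for illustration.
prior_T = 0.5            # initial credence in theory T
p_obs_given_T = 0.2      # T assigns low probability to the evidence we keep seeing
p_obs_given_not_T = 0.8  # the alternatives assign it high probability

posterior_T = prior_T
for n in range(1, 11):
    numerator = p_obs_given_T * posterior_T
    posterior_T = numerator / (numerator + p_obs_given_not_T * (1 - posterior_T))
    print(f"after observation {n:2d}: P(T | evidence) = {posterior_T:.2e}")

# After ten unfavourable observations the posterior is around 1e-06:
# T is not logically refuted, but for practical purposes it is dead.
```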
Wow, this is honestly baffling. It sounds as if Deutsch doesn’t know about the generalised form of Bayes’ theorem (I’m sure he does know, which makes me feel worse).
You make an excellent point. Bayes’ theorem can be applied over the whole space of possible hypotheses, not just T and ~T.
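For reference, this is the form I have in mind, in standard notation, where the hypotheses $H_1, H_2, \ldots$ are mutually exclusive and jointly exhaustive:

$$P(H_i \mid E) = \frac{P(E \mid H_i)\, P(H_i)}{\sum_j P(E \mid H_j)\, P(H_j)}$$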
If a top physicist can be this biased, then I cannot be surprised by anything anymore.
Thank you very much for your response Yoav Ravid.
This is a fascinating critique of David Deutsch and The Beginning of Infinity by one of his former colleagues. It is ironic that Deutsch sees himself as an expert on counter-dogma, yet he is dogmatic about his convictions. Cultish Countercultishness springs to mind.
I agree, it is more a critique of Deutsch as a person than of the book. I still think it is a good book overall.
Source (emphasis added by me):
Large ground based telescopes can make images as sharp as or sharper than the Hubble Space Telescope, but only if atmospheric blurring is corrected. Previously, the deformable mirrors available to do this were small, flat, and relatively inflexible. They could be used only as part of complex instruments attached to conventional telescopes.
But in this new work, one of the two mirrors that make up the telescope optics is used to make the correction directly. The new secondary mirror makes the entire correction with no other optics required, making for a more efficient and cleaner system.
Like other secondary mirrors, this one is made of glass over 2 feet in diameter and is a steeply curved dome shape. But under the surface, it is like no other. The glass is less than 2 millimeters thick (less than eight-hundredths of an inch). It literally floats in a magnetic field and changes shape in milliseconds, virtually real-time. Electro-magnetically gripped by 336 computer-controlled “actuators” that tweak it into place, nanometer by nanometer, the adaptive secondary mirror focuses star light as steadily as if Earth had no atmosphere. Astronomers can study precisely sharpened objects rather than blurry blobs of twinkling light.
Is bias within academia ever actually avoidable?
Let us take the example of Daniel Dennett vs David Chalmers. Dennett calls philosophical zombies an “embarrassment,” while Chalmers continues to double down on his conclusion that consciousness cannot be explained in purely physical terms. If Chalmers conceded and switched teams, he would become “just another philosopher,” while Dennett would achieve an academic victory.
As an aspiring world-class philosopher, you have little incentive to adopt the dominant view, because if you do you will become just another ordinary philosopher. By adopting a radically different stance, you establish an entirely new “school” and place yourself at its helm. Meanwhile, it would take considerably more effort to reach the helm of one of the more well-established schools, e.g. physicalism and compatibilism.
Thus, motivated skepticism and motivated reasoning seem to me to be completely unavoidable in academia.
I tend to agree. It isn’t easy to generalise what constitutes a successful explanation, especially as one moves higher up the layers of abstraction (as you’ve put it) or further out into realms that are infeasible to test.
What do you think is an elegant way to define the phenomenon of explanation that is more general than “hard-to-vary assertions about reality”?
I agree with you here. I made a mistake but on the bright side, I learnt a lot about the generalised form of Bayes’ theorem which applies to all possible hypotheses. This was also how Eliezer explained this relationship between the posterior and the numerator in Decoherence is Falsifiable and Testable. I was trying to simplify the relationship between Bayes’ theorem and Deutsch’s criterion for good explanations for the sake of the post but I oversimplified too much.
I still think that Bayes’ theorem and Deutsch’s criterion for good explanation are compatible and that, in a practical sense, one can be explained in terms of the other, but using the generalised form of Bayes is necessary.
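As a rough illustration of what I mean by compatibility (the numbers here are invented): a hard-to-vary theory makes narrow, risky predictions, so it puts high likelihood on the specific data we observe, whereas an easy-to-vary theory spreads its likelihood over almost any outcome. Under the generalised Bayes update, the former ends up with most of the posterior mass:

```python
# Rough sketch with invented numbers: a posterior over three candidate hypotheses.
# "hard_to_vary" makes a narrow, risky prediction that matches the observed data;
# "easy_to_vary" is compatible with almost anything, so its likelihood is spread thinly;
# "rival" predicted a different outcome altogether.
priors = {"hard_to_vary": 1/3, "easy_to_vary": 1/3, "rival": 1/3}
likelihoods = {"hard_to_vary": 0.90, "easy_to_vary": 0.10, "rival": 0.01}

evidence = sum(likelihoods[h] * priors[h] for h in priors)  # normalising constant P(data)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(f"P({h} | data) = {p:.3f}")
# The hard-to-vary hypothesis ends up with roughly 0.89 of the posterior mass.
```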
I updated my post to explain that this part is slightly incorrect.
In a literal sense, Eliezer said, “The roots of knowledge are in observation.” If we presented this statement, in isolation, to Deutsch, he would vehemently disagree and tell us, “No, we interpret observations through explanatory theories.” However, I don’t think Eliezer and Deutsch disagree here. Both agree that there is a map and a territory and that the map comprises models, i.e., explanatory theories.
Good points. There were several chapters in Rationality: A-Z dedicated to this. According to Max Tegmark’s speculations, all mathematically possible universes exist, and we happen to be in one described by a simple Standard Model. I suspect that the question of why simple explanations are so effective in this universe is unanswerable, but it is still fun to speculate about.
Good points about the lack of emphasis on hypothesis-formation within the Bayesian paradigm. Eliezer talks about this a little in Do Scientists Already Know This Stuff?
Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity. I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences.
I long for a deeper treatment of hypothesis-formation. Any good books on that?
Incidentally, Popper also thought that you couldn’t falsify a theory unless we have a non-ad hoc alternative that explains the data better.
This is so interesting. Do you know where I can read more about this? Conjectures and Refutations?
It surely is an incentive structure problem. However, I am uncertain about the extent to which incentive structures can be “designed”. They seem to come about as a result of thousands of years of gene-culture coevolution.
Peer review has a similar incentive misalignment. Why would you spend a month reviewing someone else’s paper when you could write your own instead? This point was made by Scott Aaronson during one of his AMAs, but he didn’t attempt to offer a solution.
You make excellent points. The growth of knowledge is ultimately a process of creativity alternating with criticism, and I agree with you that idea generation is underappreciated. Outlandish ideas are met with ridicule most of the time.
This passage from Quantum Computing Since Democritus by Scott Aaronson captures this so well:
[I have changed my attitudes towards] the arguments of John Searle and Roger Penrose against “strong artificial intelligence.” I still think Searle and Penrose are wrong on crucial points, Searle more so than Penrose. But on rereading my 2006 arguments for why they were wrong, I found myself wincing at the semi-flippant tone, at my eagerness to laugh at these celebrated scholars tying themselves into logical pretzels in quixotic, obviously doomed attempts to defend human specialness. In effect, I was lazily relying on the fact that everyone in the room already agreed with me – that to these (mostly) physics and computer science graduate students, it was simply self-evident that the human brain is nothing other than a “hot, wet Turing machine,” and weird that I would even waste the class’s time with such a settled question. Since then, I think I’ve come to a better appreciation of the immense difficulty of these issues – and in particular, of the need to offer arguments that engage people with different philosophical starting-points than one’s own.
I think we need to strike a balance between the veracity of ideas and tolerance of their outlandishness. This topic has always fascinated me, but I don’t know of a concrete criterion for effective hypothesis generation. The simplicity criterion of Occam’s Razor is okay, but it is not the be-all and end-all.
Hello,
My name is Alexander, and I live and work as a software engineer in Australia. I studied the subtle art of computation at university and graduated some years ago. I don’t know the demographics of LessWrong, but I don’t imagine myself unique around here.
I am fascinated by the elegance of computation. It is stunning that we can create computers to instantiate abstract objects and their relations using physical objects and their motions and interactions.
I have been reading LessWrong for years but only recently decided to start posting and contributing towards the communal effort. I am thoroughly impressed by the high-quality standards maintained here, both in terms of the civility and integrity of discussions as well as the quality of software. I’ve only posted twice and have learnt valuable knowledge both times.
My gateway into Rationality has primarily been through reading books. I became somewhat active on Goodreads some years ago and started posting book reviews as a fun way to engage the community and practice critical thinking and idea generation. I quickly gravitated towards Rationality books and binge-read several of them. Rationality and Science books have been formative in shaping my worldview.
Learning the art of Rationality has had a positive impact on me. I cannot prove a causal link, but it probably exists. Several of my friends have commented that conversations with me have brought them clarity and optimism in recent years. A few of them were influenced enough to start frequenting LessWrong and reading the sequences.
I found Rationality: A-Z to be written in a profound and forceful yet balanced and humane way and, most importantly, with brilliant wit. I found this quote from Church vs Taskforce awe-inspiring:
If you’re explicitly setting out to build community—then right after a move is when someone most lacks community, when they most need your help. It’s also an opportunity for the band to grow.
Based on my personal experience, LessWrong is doing a remarkable job building out a community around Rationality. LessWrong seems very aware of the pitfalls that can afflict this type of community.
Over on Goodreads, a common criticism I see of Rationality and Effective Altruism is a fear of cultishness (with the less legitimate critics claiming that Rationality is impossible because Hegel said the nature of reality is ‘contradiction’). Such criticisms tend to be wary of the tendency of such communities towards reinforcing their own biases and applying motivated skepticism towards outsider ideas. However, for what it’s worth, that is not what I see around here. As Eliezer elucidates in Cultish Countercultishness, it takes an unwavering effort to resist the temptation towards cultishness. I hope to see this resistance continuing!
This is my Goodreads profile (removed link for privacy given this is the public internet). You are welcome to add me as a friend if you use Goodreads.
I am considering posting book reviews on LessWrong instead of Goodreads because I love the software quality here, especially the WYSIWYG editor. Goodreads is still stuck on an HTML editor from 1993. However, given the high epistemic standards on LessWrong, I will be slower to post here. I never expect anyone to ask me for a source over at Goodreads, but here I had better be rigorous and prudent about what I say, which is a good thing!
Funny how the top-rated review of this book on Goodreads ignores everything Pinker says about cognitive biases and probabilistic reasoning and offers the strawman rebuttal that “There are no objective facts; such things are self-contradictory.” If that statement were true, it would itself be a contradiction.
I find it astonishing that people continue to conflate “rationality” with “objective facts” when the modern meaning of rationality acknowledges that the map is not the territory.
I am considering using Goodreads to manage my bookshelves electronically. But for reviews, I plan to post links to my LessWrong reviews to avoid spending time formatting text for both editors. Formatting text for Goodreads is rather effortful.
I have found the reviews and the discussions on Goodreads to be, on average, more concerned with persuasion than explanation.
Additionally, Goodreads would benefit significantly from a more effective voting system. You can only upvote, so people with a large following tend to dominate, regardless of the veracity or eloquence of what they write.
A common criticism of rationality I come across rests upon the absence of a single, ultimate theory of rationality.
Their claim: the various theories of rationality offer differing assertions about reality and, thus, differing predictions of experiences.
Their conclusion: Convergence on objective truth is impossible, and rationality is subjective. (Which I think is a false conclusion to draw).
I think that this problem is analogous to Moral Uncertainty. What is the solution to this problem? Does a parliamentary model similar to that proposed by Bostrom and Ord make sense here? I am sure this problem has been discussed on LessWrong or elsewhere. Please direct me to where I can learn more about this!
I would like to improve my argument against the aforementioned conclusion, and to do that I need to understand this problem better.
Oh yes, I didn’t mention the differences between the worldview presented in Rationality: A-Z and that of David Deutsch.
For example, Deutsch is strongly opposed to the dogmatic nature of Empiricism, which is the sixth virtue of rationality in the LessWrong worldview. My take is that Deutsch believes that explanatory theories are more foundational to our understanding of reality than our experiences or observations. He asserts that we interpret our experiences and observations of reality through explanatory theories. He further asserts that experiences and observations are not the sources of our theories. For example, Einstein came up with Relativity with no direct observational data; he didn’t use the perihelion precession of Mercury. Instead, experiences and observations are what we use to judge competing explanatory theories.
I don’t feel too strongly either way at this point in my journey. I think Deutsch makes a good point, but so does Eliezer. I will probably start to feel more strongly about this in one direction or the other as I study more science.