Less Wrong Lacks Representatives and Paths Forward
In my understanding, there’s no one who speaks for LW, as its representative, and is *responsible* for addressing questions and criticisms. LW, as a school of thought, has no agents, no representatives – or at least none who are open to discussion.
The people I’ve found interested in discussion on the website and Slack have diverse views which disagree with LW on various points. None claim LW is true. They all admit it has some weaknesses, some unanswered criticisms. They have their own personal views which aren’t written down, and which they don’t claim to be correct anyway.
This is problematic. Suppose I wrote some criticisms of the sequences, or some Bayesian book. Who will answer me? Who will fix the mistakes I point out, or canonically address my criticisms with counter-arguments? No one. This makes it hard to learn LW’s ideas in addition to making it hard to improve them.
My school of thought (Fallible Ideas – FI – https://fallibleideas.com) has representatives and claims to be correct as far as is known (like LW, it’s fallibilist, so of course we may discover flaws and improve it in the future). It claims to be the best current knowledge, which is currently non-refuted, and has refutations of its rivals. There are other schools of thought which say the same thing – they actually think they’re right and have people who will address challenges. But LW just has individuals who individually chat about whatever interests them without there being any organized school of thought to engage with. No one is responsible for defining an LW school of thought and dealing with intellectual challenges.
So how is progress to be made? Suppose LW, vaguely defined as it may be, is mistaken on some major points. E.g. Karl Popper refuted induction. How will LW find out about its mistake and change? FI has a forum where its representatives take responsibility for seeing challenges addressed, and have done so continuously for over 20 years (as some representatives stopped being available, others stepped up).
Which challenges are addressed? *All of them*. You can’t just ignore a challenge because it could be correct. If you misjudge something and then ignore it, you will stay wrong. Silence doesn’t facilitate error correction. For information on this methodology, which I call Paths Forward, see: https://curi.us/1898-paths-forward-short-summary BTW if you want to take this challenge seriously, you’ll need to click the link; I don’t repeat all of it. In general, having much knowledge is incompatible with saying all of it (even on one topic) upfront in forum posts without using references.
My criticism of LW as a whole is that it lacks Paths Forward (and lacks some alternative of its own to fulfill the same purpose). In that context, my criticisms regarding specific points don’t really matter (or aren’t yet ready to be discussed) because there’s no mechanism for them to be rationally resolved.
One thing FI has done, which is part of Paths Forward, is it has surveyed and addressed other schools of thought. LW hasn’t done this comparably – LW has no answer to Critical Rationalism (CR). People who chat at LW have individually made some non-canonical arguments on the matter that LW doesn’t take responsibility for (and which often involve conceding LW is wrong on some points). And they have told me that CR has critics – true. But which criticism(s) of CR does LW claim are correct and take responsibility for the correctness of? (Taking responsibility for something involves doing some major rethinking if it’s refuted – addressing criticism of it and fixing your beliefs if you can’t. Which criticisms of CR would LW be shocked to discover are mistaken, and then be eager to reevaluate the whole matter?) There is no answer to this, and there’s no way for it to be answered because LW has no representatives who can speak for it and who are participating in discussion and who consider it their responsibility to see that issues like this are addressed. CR is well known, relevant, and makes some clear LW-contradicting claims like that induction doesn’t work, so if LW had representatives surveying and responding to rival ideas, they would have addressed CR.
BTW I’m not asking for all this stuff to be perfectly organized. I’m just asking for it to exist at all so that progress can be made.
Anecdotally, I’ve found substantial opposition to discussing/considering methodology from LW people so far. I think that’s a mistake because we use methods when discussing or doing other activities. I’ve also found substantial resistance to the use of references (including to my own material) – but why should I rewrite a new version of something that’s already written? Text is text and should be treated the same whether it was written in the past or today, and whether it was written by someone else or by me (either way, I’m taking responsibility). I think that’s something people don’t understand; they’re used to people throwing references around both vaguely and irresponsibly – but they haven’t pointed out any instance where I made that mistake. Ideas should be judged by the idea, not by attributes of the source (reference or non-reference).
The Paths Forward methodology is also what I think individuals should personally do – it works the same for a school of thought or an individual. Figure out what you think is true *and take responsibility for it*. For parts that are already written down, endorse that and take responsibility for it. If you use something to speak for you, then if it’s mistaken *you* are mistaken – you need to treat that the same as your own writing being refuted. For stuff that isn’t written down adequately by anyone (in your opinion), it’s your responsibility to write it (either from scratch or using existing material plus your commentary/improvements). This writing needs to be put in public and exposed to criticism, and the criticism needs to actually get addressed (not silently ignored) so there are good Paths Forward. I hoped to find a person using this method, or interested in it, at LW; so far I haven’t. Nor have I found someone who suggested a superior method (or even *any* alternative method to address the same issues) or pointed out a reason Paths Forward doesn’t work.
Some people I talked with at LW seem to still be developing as intellectuals. For lots of issues, they just haven’t thought about it yet. That’s totally understandable. However, I was hoping to find some developed thought which could point out any mistakes in FI or change its mind. I’m seeking primarily peer discussion. (If anyone wants to learn from me, btw, they are welcome to come to my forum. It can also be used to criticize FI. http://fallibleideas.com/discussion-info) Some people also indicated they thought it’d be too much effort to learn about and address rival ideas like CR. But if no one has done that (so there’s no answer to CR they can endorse), then how do they know CR is mistaken? If CR is correct, it’s worth the effort to study! If CR is incorrect, someone had better write that down in public (so CR people can learn about their errors and reform; and so perhaps they could improve CR to no longer be mistaken or point out errors in the criticism of CR).
One of the issues related to this dispute is I believe we can always proceed with non-refuted ideas (there is a long answer for how this works, but I don’t know how to give a short answer that I expect LW people to understand – especially in the context of the currently-unresolved methodology dispute about Paths Forward). In contrast, LW people typically seem to accept mistakes as just something to put up with, rather than something to try to always fix. So I disagree with ignoring some *known* mistakes, whereas LW people seem to take it for granted that they’re mistaken in known ways. Part of the point of Paths Forward is not to be mistaken in known ways.
Paths Forward is a methodology for organizing schools of thought, ideas, discussion, etc, to allow for unbounded error correction (as opposed to typical things people do like putting bounds on discussions, with discussion of the bounds themselves being out of bounds). I believe the lack of Paths Forward at LW is preventing the resolution of other issues like about the correctness of induction, the right approach to AGI, and the solution to the fundamental problem of epistemology (how new knowledge can be created).
I think I’m the person curi is referring to as “some people” throughout his post. There is a long-ass continuing thread starting about here in which curi tries to convert me to the Sole True Way of Correct Thinking and isn’t making much headway.
So all y’all don’t have to worry—the lazy underdeveloped intellectual is me.
P.S. The tl;dr of OP seems to be:
Take me to your leader!
...
What kind of savages are you that you don’t even have a Glorious Leader and a Big Sacred Book?!??
You’re incorrect. I discussed with various other people, including gjm, on Slack. I’ve also discussed some with other people on the forum, both in the past and recently. Also, I don’t refer to “some people” throughout the post, only in one paragraph.
I do consider your approach (Lumifer) to have Paths Forward problems. I don’t think you are even claiming to be a representative of Less Wrong and to do the various things I say LW lacks. So you’re a reasonably typical example, not a counter-example.
Correct. There is no Pope of LW, we don’t all agree about everything, and no one has any obligation to answer anyone else’s objections. That may be inconvenient for some purposes, but that’s how it is.
This is also how it is for many other things besides LW. Suppose someone claims that there is something wrong with science; there is no Pope of Science any more than there is one of LW, no one responsible for defining how scientists do their thing or answering criticisms. The same goes for all of the following: atheism, Protestantism, conservatism, environmentalism, reductionism, mathematical intuitionism, moral nonrealism, “Big Bang” cosmology. And, in fact, for pretty much everything else.
There are movements that have a clearly defined set of doctrines and a clearly defined representative, such that if the representative’s positions are refuted then the whole movement is sunk. These movements are generally called cults. Empirically, being such a movement doesn’t seem to be conducive to valuable things like good thinking, success in persuading others, or making the movement’s members successful outside the movement.
LW is not the kind of thing that can be “true”. Nor, really, is (say) “the LW philosophy” in so far as there is one—because a philosophy is a complicated machine with lots of moving parts and almost certainly some things in it aren’t quite right.
And, of course, because really there isn’t a precisely-defined thing that is “the LW philosophy” or “what LW people believe” or whatever; LW is a community, not a cult, and we don’t expect to agree about everything.
You’ll probably find plenty of us who will happily endorse some weaker claim, though; something like “most of the ideas found in the Sequences, in Scott’s less-blatantly-speculative posts, etc., are pretty good ideas”.
Well, perhaps we won’t. But if we do, it will probably go like this: someone active on LW learns of a problem; posts about it on LW / discusses it in person with other LW people / posts it in some other venue where LW people hang out; it gets discussed, and (perhaps after a few iterations, since “you’re doing it all wrong” tends to be a hard lesson to learn even when true) other LW people are gradually convinced; then either we start doing things differently, or (if not a large enough fraction are convinced) some people go off and form their own (sub)community where things are done differently.
There is no procedure that guarantees not staying wrong. Always addressing every challenge is one way to try to avoid staying wrong, but it offers no such guarantee; and the fact that declining to address every challenge also fails to guarantee it is not a strong argument against declining.
But LW does not have “representatives” in the sense you describe, and isn’t likely to start having them. However, LW has addressed CR in a weaker sense: e.g., some time ago you came here and tried to persuade us to abandon LW-style probabilistic inference in favour of your version of critical rationalism; the community took a look at your ideas, was not impressed, and downvoted your posts to hell.
Of course that may have been a mistake. (Again: there is no guarantee of not making mistakes. Not ever.) But that’s what addressing CR looks like for a community like LW: someone proposes it, making the best case they can, then people take a look, argue about it as seems appropriate, and see what they think.
You say that as if “still developing as intellectuals” implies some sort of immaturity. I hope to continue developing as an intellectual until I die, though empirically it seems most people don’t manage to do that :-).
It is possible that I haven’t seen everything you’re referring to here. But the things I have seen, on LW and on the LW Slack, that I think you’re referring to are not accurately described by the words above. What I’ve seen people say is much more like “I don’t think your particular ideas, from what I have seen about them, merit the particular effort you are saying I should put in”. Or, more generally, “It is not feasible to give a deep investigation to every single rival idea that comes along.”
These are not the same proposition as “it would be too much effort to learn about and address rival ideas”: both are compatible with the idea that some ideas might justify deeper investigation; it’s just that most of the LW community evidently hasn’t yet been persuaded by your arguments to give that investigation to CR.
You are welcome to believe that. I don’t think you should find it surprising that we are largely unconvinced. Not least because the only actual argument you’ve offered for this, other than saying “go to my website”, is the claim that it’s always a mistake to leave anything at all unaddressed—which I suspect seems to others here, as it seems to me, an obvious mistake in its own right.
Asking for someone who thinks some set of ideas is consistent and true, and will address questions about those ideas thoroughly, is not asking for a Pope. It’s more like asking for someone who’s more than a casual fan.
One purpose that the lack of a serious LW advocate is “inconvenient” for is truth-seeking—a rather important case!
No. But curi was asking for more than that. E.g., he wants someone who “speaks for LW”. He wants them to do it “as [LW’s] representative”. He wants them to address arguments against LWish ideas “canonically”. He wants someone “responsible for defining an LW school of thought”. And so forth.
And, as I said above, this is just not how most communities or schools of thought work, nor should it be, nor I think could it be. Except for ones where in order to claim any sort of affiliation you are required to sign up to a particular body of doctrine. That mostly means religions, political parties, etc. And (again, as I said above) groups of that sort don’t have an encouraging record of successfully distinguishing truth from error; I don’t think we should be emulating them.
if someone spoke for something smaller than LW, e.g. Bayesian Epistemology, that’d be fine. CR and Objectivism, for example, can be questioned and have people who will answer (unlike science itself).
and if someone wanted to take responsibility for gjm-LW or lumifer-LW or some other body of ideas which is theirs alone, that’d be fine too. but people aren’t doing this as a group or individually!
The fact that Objectivism has cultists who want to defend the Objectivist way isn’t a quality that’s worthy of emulation. If CR copies the same groupthink structures, that’s no argument in its favor either.
I like Ayn Rand’s writing, not whatever you think is a “cult”. See e.g. http://curi.us/1930-harry-binswanger-refuses-to-think
If you have an argument about Ayn Rand’s ideas, that would be important.
Regardless, you can get correct answers to tons of common questions about Objectivism at a variety of places online (including both pro-ARI and anti-ARI places). That’s good. And Binswanger, linked negatively above, engaged with Popperian criticism more than anyone at LW has. He also has combined seriously writing down ideas with discussing ideas, whereas LW people seem to only do much of one or the other, which I think is a big problem.
Speaking for “Objectivism” instead of one’s personal opinions implies structures that get people to think alike in a cultish way.
You can, of course, go and bother Eliezer. I doubt he would be inclined to listen to you, though.
Eliezer has already indicated [1] he’d prefer to take administrative action to prevent discussion than speak to the issues. No Paths Forward there!
[1] http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/3wf5
That’s … not a very accurate way of describing what happened. Not because there’s literally no way to understand it that makes it factually correct, but because it gives entirely the wrong impression.
Here’s a more complete description of what happened.
curi came here in early April 2011 (well, he actually first appeared earlier, but before then he made a total of three comments ever) and posted five lengthy top-level posts in five days. They were increasingly badly received by the community, getting scores of −1, −1, −1, −22, and −38. The last one was entitled “The conjunction fallacy does not exist” and what it attempted to refute was a completely wrong statement of what the conjunction fallacy is about, namely the claim (which no one believes) that “people attribute higher probability to X&Y than to Y” for all X and Y.
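(For reference, the theorem that really does hold for all X and Y, since the event X∧Y is contained in the event Y:

$$P(X \wedge Y) \le P(Y)$$

The conjunction fallacy is the empirical finding that people sometimes judge a specific, vivid conjunction as more probable than one of its conjuncts, not a claim that they always do.)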
As this was happening, more and more of the comments on curi’s posts were along the general lines of this one saying, in essence: This is not productive, you are just repeating the same wrong things without listening to criticism, so please stop.
It was suggested that there was some reason to think curi was using sockpuppets to undo others’ downvotes and keep enough karma to carry on posting.
And then, in that context, curi’s fifth post—which attempted to refute the conjunction fallacy but which completely misunderstood what the conjunction fallacy is, and which was sitting on −38 points—was removed.
Now, maybe that’s because Eliezer was afraid of curi’s ideas and wanted to close down discussion or something of the sort. But a more plausible explanation is that he thought further discussion was likely to be a waste of time for the same reason as several commenters.
I don’t think removing the post was a good decision, and generally I think Eliezer’s moderation has been too heavy-handed on multiple occasions. But I don’t think the kind of explanation curi is offering for this is at all likely to be correct.
On the other hand, if curi is merely saying that Eliezer is unlikely to be interested if curi contacts him and asks for a debate on Bayes versus CR, then I think he’s clearly right about that.
Yep, sounds like Eliezer. No surprises.
Well, both Lumifer and I have (mostly in different venues) been answering a lot of questions and criticisms you’ve posed. But no, I don’t think either of us feels “responsibility” in the specific (and, I think, entirely non-standard) sense you’re using here, where to “take responsibility” for a set of ideas is to incur a limitless obligation to answer any and all questions and criticisms made of those ideas.
there are methods for doing Paths Forward with limited resource use. you just don’t want to learn/discuss/use them.
The total of what your “paths forward” page says about limited resources: (1) instead of writing your own answers to every criticism, you can point critics to already-written things that address their criticisms; (2) if you have a suitable forum with like-thinking other people there, they may address the criticisms for you.
Perhaps it seems to you that these make it reasonable to have a policy of addressing every criticism and question despite limited resources. It doesn’t seem so to me.
I have read your document, I am not convinced by your arguments that we should attempt to address every single criticism and question, I am not convinced by your arguments that we can realistically do so, and I think the main practical effects of embracing your principles on this point would be (1) to favour obsessive cranks who have nothing else to do with their time than argue about their pet theories, (2) to encourage obsessive-crank-like behaviour, and (3) to make those who embrace them spend more time arguing on the internet. I can’t speak for others, but I don’t want to give advantages to obsessive cranks, I don’t want to become more obsessive and cranky myself, and I think it much more likely that I spend too much time arguing on the internet rather than too little.
I see nothing to suggest that further investigation of “paths forward” is likely to be a productive use of my time.
So: no, I don’t want to spend more time learning, discussing, or using “paths forward”. I think it would be a suboptimal way to use that time.
By Jove, I think you got it!
:-D
And who are you, a freshly-minted account with strong opinions?
Both here and on the LW slack, “Justin CEO” turned up at about the same time as curi and has done more or less nothing other than agreeing with curi and disagreeing with people who are disagreeing with him.
This is perfectly consistent with Justin not being a sockpuppet of curi, of course.
BTW I don’t even think anything I said was particularly opinionated (by my standards).
And the context makes it funny.
Saying—on the LW forum—that it’d be good for LW to have a strong advocate? Ooooh, controversial!
No, not particularly opinionated. But it’s an interesting place and time you chose for diving into LW.
is it really that interesting? i posted a rough draft of “Less Wrong Lacks Representatives and Paths Forward” to the FI forum for comment. i routinely post links and comments about discussions i’m having to FI. is this surprising?
Not you—RealJustinCEO.
that wasn’t clear enough? he is a member of the FI forum. he followed one of my links.
Yep, I’m a freshly-minted account with strong opinions.
downvotes aren’t arguments. addressing ideas, in my post, refers to intellectually addressing them – e.g. explaining why an idea is incorrect.
anyway, do you have any suggestion as to a Path Forward to get the intellectual disagreements resolved?
also did you actually read about Paths Forward? If so, why don’t you reply to it directly and point out a mistake in it?
Downvoting is not an argument because downvoting is a judgement that an idea is not worthy of “intellectually addressing” (on this forum). Making that judgement is not the same as not addressing the idea.
I did not claim that downvotes are arguments, of course. What they are is assessments. As it happens, your posts about CR here got comments as well as downvotes.
Not necessarily, just as if we were visited by fundamentalists demanding that everything be “proved from scripture”, I would not necessarily have a suggestion as to how to Prove From Scripture that their fundamentalism was wrong.
And I think that “to get the intellectual disagreements resolved” is a noble but hilariously overoptimistic goal. We are not, realistically, going to end up agreeing about everything, and picking an approach on the basis of whether it could in principle lead to us agreeing about everything is not a good idea.
Yes.
Because I think many other things I could do with the same time I could use for that would be more productive.
I gave this a strong upvote, not because I agree with this, but because I think you make some good points that need to be heard.
Even though I don’t think there should be a Less Wrong pope or official doctrinal creed, I agree that having some champions who go out and debate these principles could be pretty cool!
Also, Paths Forward is certainly an interesting idea, even if it is overly idealistic in terms of how much effort it expects people to invest.
I’m thankful for this state. Nobody represents my beliefs, not even me. They are free-floating and do not need a spokesperson or representative to make them true (or correct them when they’re not). Actually, beliefs are only models anyway, and there is no “true” except to the extent that they correlate with some observed states of the universe.
There are plenty of systems of thought out there, and there are opportunity costs to spending energy delving into different systems of thought. Nothing you did here suggests to me that it’s worthwhile to invest a significant amount of time into delving into CR.
Whether or not individual X did Y is not a major point as far as most people on LW are concerned.
Of course, part of the growth mindset is about constantly developing.
Creating a School of Thought would violate the idea of keeping identity small. We do sometimes use terms like “aspiring rationalist” or speak of LW but that’s not the focus of our intellectual pursuits. We only use labels like that when they are useful.
We have plenty of discussions where people change their minds.
In one of EY’s recent posts, he described how he updated on string theory physicists knowing more than he previously thought, because he took a bet.
Betting has the advantage of letting reality decide what’s right. That’s more important than providing clever arguments in favor of a position and as such it’s valued more highly (or at least we try to value it more highly).
This is addressed in Paths Forward. You’re just plain ignoring what I said. You aren’t engaging with the answers to this that I already provided; you aren’t pointing out where my reasoning was mistaken; you’re just acting like half my ideas don’t exist at all instead of actually arguing with them.
As I explained already, PF is important on an individual basis and you should all individually do it regardless of whether LW does.
Induction was refuted decades ago and you guys aren’t updating and don’t have a mechanism to become less wrong about this.
And I told you that you haven’t made a case that suggests that it’s worth reading.
How do you know that we aren’t updating? It seems to me like you are using induction to make that assessment. You observe that we don’t update towards your arguments and you conclude that we don’t update in general.
Exactly. That is by design. See the title of the site? It doesn’t say “MoreRight”. Here even Yudkowsky, the Founding Father, was frequently disagreed with.
This is the School-less school.
There are paths forward but it’s no one else’s job to show them to you or help you find them.
The attitude of expectation is one that does not fit here and will not encourage anyone to help you. I also expect it runs into walls in other cultures too.
Additional info:
http://curi.us/2067-empiricism-and-instrumentalism
http://curi.us/2065-open-letter-to-machine-intelligence-research-institute
http://curi.us/2066-replies-to-gyrodiot-about-fallible-ideas-critical-rationalism-and-paths-forward
http://curi.us/2063-criticism-of-eliezer-yudkowsky-on-karl-popper
As some have already said, this is considered a feature, not a bug. We do not care (or try not to care) about “what is the LW way?”. Instead we (try to) focus on “how is it, really?”. To quote Eliezer, who is closest to being the representative of LW:
Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
So, it feels like you would like to have a phone number of the Great Teacher, to ask him about the color of the sky. While this site is—if I may continue the metaphor—trying to teach you how to actually look at the sky, and explaining how the human eye perceives colors.
If you find that Sequences say “A” and truth is actually “B”, what you can do is write an article on LW explaining why “B” is true. (Pointing out that Sequences say “A” is optional; I think it would be better done afterwards, so that people can debate “B” independently. But do as you wish.)
It may happen that different people will give different opinions. But then you can let them argue against each other.
Here I may be just talking about myself, but I seek progress at a completely different place. I don’t care that much about playing with words, which many intelligent people, including you, seem to be so fond of. I see humans, including myself, as deeply imperfect beings. No matter how much I am told “X”, no matter how much I in theory agree with “X”, if I pay enough attention, I find myself going against “X” all the time. Thus, instead of having yet another debate about virtues of “X”, I would rather spend my attention trying to practice “X”. Because as long as there is a huge gap between what I profess and what I actually do, it does not matter much whether I profess correct ideas. Actually, talking about rationality, it may be even worse. The ideas I profess can be not only right or wrong, but possibly also irrelevant, or confused, or utterly meaningless.
You linked a website. Let me just look at the first article: “Why is Reason Important?”. You talk about something called “Reason”. Do you mean some hypothetical ideal of reason, or how smart but imperfect people actually do it? Oh wait, let me ask even more important question: Are you even aware that there is a distinction between these two? Because the article does not reflect that.
Still reading the first paragraph: “Reason also rejects the idea that authorities can or should tell us what the truth is. Instead, we should judge ideas ourselves, and based on the content of the idea not the person who said it. Even if I am the person who said an idea, and I have a PhD, that doesn’t count for anything”… Really? What is your opinion on the existence of atoms, or theory of relativity? I mean, the Einstein guy is just some unimportant rando; so did you develop the whole theory on your own? Did you do all the relevant experiments to confirm that atoms do indeed exist? Wait, I have a more important question: Even if you have personally verified the theory of relativity, why did you even decide that verifying this theory is worth your time? I mean, (1) there are millions of possible theories, and you certainly cannot verify all of them, and (2) the fact that Einstein and a few others believe in some specific theory “X” means absolutely nothing before you verified it for yourself, right? So, why did you even choose to pay attention to the theory of relativity, if Einstein’s words mean nothing, and there were million other potential theories competing for your attention?
...this was just an example of what I meant by “playing with words”. You wrote a whole website of arguments that I guess seem convincing to you, and yet I find mistakes in the very first paragraph of the very first article. If you can imagine that this is how I feel about almost each paragraph of each article on your website, you can understand why I am unimpressed, and why I don’t want to go this way.
Okay, I am curious: did someone already tell you something similar to what I just did? If yes, could you please give me a pointer to how it was addressed?
you missed the intended point about representatives. the point is that anyone takes any responsibility for the ideas they believe are true. the point is e.g. that anyone be available to answer questions about some idea. if the idea has no representatives in the sense of people who think it’s good and answer questions about it, then that’s problematic. then it’s hard to learn and there’s no one to improve or advocate it.
And then people don’t like me, b/c i’m a heretic who denies induction, so they ignore it. when there is no mechanism for correcting errors, what you end up with is bias: people decide to pay attention, or not, according to social status, bias, etc.
For all X? E.g. “don’t murder”? This part isn’t clear.
The tradition of reason deals with both. It offers some guiding principles and ideals, as well as practical guidance, rules of thumb, tips, etc. People have knowledge of both of these.
I am familiar with some science and able to make some judgements about scientific arguments myself, especially using resources like asking questions of physicists I know and using books/internet. I don’t helplessly take people’s words for things; I seek out explanations at the level of detail I’m interested in and make a judgement. And science is an interest of mine.
I have no criticism of the atomic theory, no objection to it. I know some stuff about it and I agree. I don’t know of any contrary position that’s any good. I’m convinced by the reasoning, not the prestige of the reasoners.
I didn’t personally do all the experiments. Why should I? I don’t accept an experiment merely b/c the person who did it had a PhD, but I don’t automatically reject it either. I make a judgement about the experiment (or idea) instead of about the person’s credentials.
I paid attention to physics, initially, because I found the arguments in the book The Fabric of Reality high quality and interesting. The book looked interesting to me, so I read the opening paragraphs online, and I thought they were good, so I got the book. I didn’t look for the book with the most prestigious author. I don’t see why these historical details matter, but you asked about them. Physics is important (we live in the physical world; we’re made of atoms; we move; etc.) and worthy of interest (though others are welcome to pursue other matters).
tl;dr: I won’t take Einstein’s word for it, but I can be impressed by his reasoning.
let’s not jump to conclusions before discussing the matter. we disagree, or there is a misunderstanding.
Have you tried posting here an article about why induction is wrong? Preferably starting with an explanation of what you mean by “induction”, just to make sure we are all debating the same thing.
Of course there is a chance that people will ignore the article, but I would be curious to learn e.g. why evolution gave so many organisms the ability of reinforcement learning, if the fundamental premise of reinforcement learning—that things in future are likely to be similar to things in the past—is wrong.
(Yeah, that’s me writing at midnight, after my daughter finally decides to go sleep. Sorry for that.)
What I meant was that for me personally, the greatest obstacle in “following reason” is not the reasoning part, but rather the following part. (Using the LW lingo, the greatest problem is not epistemic rationality, but instrumental rationality.) I feel quite confident that I am generally good at reasoning, or at least better than most of the population. What I have a problem with is actually following my own advice. Therefore, instead of developing smarter and smarter arguments, I would rather become better at implementing the things I already know.
And I suspect this is the reason why CFAR focuses on things like “trigger-action planning” et cetera, instead of e.g. publishing articles analysing the writings of Popper. The former simply seems to provide much more value than the latter.
Sometimes the lessons seem quite easy—the map is not the territory; make sure you communicate meaning, not just words; be open to changing your mind in either direction; etc—yet even after years of trying you are still sometimes doing it wrong. People enjoy “insight porn”, but what they need is practicing the boring parts until they become automatic.
But do you privilege the hypothesis if you heard it from a person with a PhD?
Oh, I guess this may be another thing that I rarely find outside of LW: reasoning in degrees of gray, instead of black and white. I am not asking whether you take each Einstein’s word as sacred. I am asking whether you increase the probability of something, if you learn that Einstein said so.
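To make the “degrees of gray” concrete, here is the standard Bayesian bookkeeping, just as a sketch, treating “Einstein said H” as ordinary evidence E:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

If Einstein asserts true physics claims more often than false ones, the likelihood ratio is greater than 1, so learning E raises the probability of H somewhat. Nothing in that requires treating his words as sacred.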
if i were to provide an anti-induction article, what properties should it have?
apparently it should be different in some way than the ones already provided by Popper and DD, as individual book chapters.
one question is whether it should assume the reader has background knowledge of CR.
if so, it’s easy, it’ll be short … and people here won’t understand it.
if not, it’ll be long and very hard to understand, and will repeat a lot of content from Popper’s books.
what about a short logical argument about a key point, which doesn’t explain the bigger picture? possible, but people hate those. they don’t respond well to them. they don’t just want their view destroyed without understanding any alternative. and anyway their own views are too vague to criticize in a quick, logical way b/c whatever part you criticize, they can do without. there is no clear, essential, philosophical core they are attached to. if advocates of induction actually knew their own position, in exacting detail, inside and out, then you could quickly point out a logical flaw and they’d go “omg, that makes everything fall apart”. but when you deal with people who aren’t very clear on their own position, and who actually think all their beliefs are full of errors and you just have to muddle through and do your best … then what kind of short argument will work?
Regardless of the topic, I would say that the article should be easy to read, and relatively self-contained. For example, instead of “go read this book by Popper to understand how he defines X” you could define X using your own words, preferably giving an example (of course it’s okay to also give a quote from Popper’s book).
I don’t even know what the abbreviation is supposed to mean. Seriously.
Generally, I think that the greatest risk is people not even understanding what you are trying to say. If you include links to other pages, I guess most people will not click them. Aim to explain, not to convince, because a failure in explaining is automatically also a failure in convincing.
Maybe it would make sense for you to look at the articles that I believe (with my very unclear understanding of what you are trying to say) may be most relevant to your topic:
1) “Infinite Certainty” (and its mathy sequel “0 And 1 Are Not Probabilities”), and
2) “Scientific Evidence, Legal Evidence, Rational Evidence”.
Because it seems to me that the thing about Popper and induction is approximately this...
Simplicio: “Can science be 100% sure about something?”
Popper: “Nope, that would mean that scientists would never change their minds. But they sometimes do, and that is an accepted part of science. Therefore, scientists are never 100% sure of their theories.”
Simplicio: “Well, if they can’t prove anything with 100% certainty, why don’t we just ignore them completely? It’s just another opinion, right?”
Popper: “Uhm… wait a minute… scientists cannot prove anything, but they can… uhm… disprove things! Yeah, that’s what they do; they make many theories, they disprove most of them, and the one that keeps surviving is the official winner, for the moment. So it’s not like the scientists proved e.g. the theory of relativity, but rather that they disproved all known competing theories, and failed to disprove the theory of relativity (yet).”
To which I would give the following objection:
1) How exactly could it be impossible to prove “X”, and yet possible to disprove “not X”? If scientists are able to falsify e.g. the hypothesis that “two plus two does not equal four”, isn’t it the same as proving the hypothesis that “two plus two equals four”?
I imagine that the typical situation Popper had in mind included a few explicit hypotheses, e.g. A, B, C, and then a remaining option “something else that we did not consider”. So he is essentially saying that scientists can experimentally disprove e.g. B and C, but that’s not the same as proving A. Instead, they proved “either A, or something else that we did not consider, but definitely neither B nor C”. In short: B and C were falsified, but A wasn’t proven. And as long as there remains an unspecified category “things we did not consider”, there is always a chance that A is merely an approximate solution, and the real solution is still unknown.
But it doesn’t always have to be like this. Especially in math. But also in real life. Consider this:
According to Popper, no matter how much scientific evidence we have in favor of e.g. the theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it. And recently, the theory of relativity was indeed falsified by an experiment. Does it mean we should stop teaching the theory of relativity, because now it was properly falsified?
With the benefit of hindsight, now we know there was a mistake in the experiment. But… that’s exactly my point. The concepts of “proving” and “falsifying” are actually much closer than Popper probably imagined. You may have a hypothesis “H”, and an experiment “E”, but if you say that you falsified “H”, it means you have a hypothesis “F” = “the experiment E is correct and falsifies the theory H”. To falsify H by E is to prove F; therefore if F cannot be scientifically proven, then H cannot be scientifically falsified. Proof and falsification are not two fundamentally different processes; they are actually two sides of the same coin. To claim that the experiment E falsifies the hypothesis H, is to claim that you have a proof that “the experiment E falsifies the hypothesis H”… and the usual interpretation of Popper is that there are no proofs in science.
The answer generally accepted on LessWrong, I guess, is that what really happens in science is that people believe theories with greater and greater probability. Never 100%. But sometimes with a very high probability instead, and for most practical purposes such high probability works almost like certainty. Popper may insist that science is unable to actually prove that moon is not made of cheese, but the fact is that most scientists will behave as if they already had such proof; they are not going to keep an open mind about it.
Short version: Popper was right about inability to prove things with 100% certainty, but then he (or maybe just people who quote him) made a mistake of imagining that disproving things is a process fundamentally different from proving things, so you can at least disprove things with 100% certainty. My answer is that you can’t even disprove things with probability 100%, but that’s okay, because the “100%” part was just a red herring anyway; what actually happens in science is that things are believed with greater probability.
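A toy calculation of what I mean, with made-up likelihood ratios (the numbers are purely illustrative, not a model of any real experiment):

```python
# Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
def update(p, likelihood_ratio):
    """Posterior probability of H after evidence with the given
    likelihood ratio P(evidence | H) / P(evidence | not H)."""
    odds = (p / (1.0 - p)) * likelihood_ratio
    return odds / (1.0 + odds)

p = 0.5  # prior: maximally unsure about H

# Five confirming experiments, each 9x likelier if H is true than if false.
for _ in range(5):
    p = update(p, 9.0)
print(f"after 5 confirmations:   p = {p:.6f}")  # ~0.999983 -- high, but not 1

# One apparently falsifying experiment, 100000x likelier if H is false.
p = update(p, 1.0 / 100000.0)
print(f"after 1 disconfirmation: p = {p:.6f}")  # ~0.371263 -- much lower, but not 0
```

The “falsification” at the end is just a large update in the opposite direction: the belief moves a lot, but it never touches 0 or 1.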
You should probably actually read Popper before putting words in his mouth.
You found this claim in a book of his? Or did you read some Wikipedia, or what?
For example, this is a quote from the Stanford Encyclopedia of Philosophy:
You guys still do that whole “virtue of scholarship” thing, or what?
Well, this specific guy has a job and a family, and studying “what Popper believed” is quite low on his list of priorities. If you want to provide a more educated answer to curi, go ahead.
If you have a job and a family, and don’t have time to get into what Popper actually said, maybe don’t offer your opinion on what Popper actually said? That’s just introducing bad stuff into a discussion for no reason.
Wovon man nicht sprechen kann, darüber muss man schweigen. (“Whereof one cannot speak, thereof one must be silent.”)
“The virtue of silence.”
Yeah, good points in both comments. Why don’t you come to my forum where we’ll appreciate them? :)
https://groups.yahoo.com/neo/groups/fallible-ideas/info
I don’t think you and I have much to talk about.
Why?
a. virtue of silence
b. it’s your job to work that out.
What happened to NVC (Non-Violent Communication)? Your comments are purely intended to hurt me.
No. That’s your interpretation. You have agency, too, to interpret what I say with clarity. You also value bold conjecture. So that’s again your problem to work out what I mean and how to apply it.
I have a thought. Since you are a philosopher, would your valuable time not be better spent doing activities philosophers engage in, such as writing papers for philosophy journals?
Rather than arguing with people on the internet?
If you are here because you are fishing for people to go join your forum, may I suggest that this place is an inefficient use of your time? It’s mostly dead now, and will be fully dead soon.
I have a low opinion of academic philosophers and philosophy journals. I was hoping to find a little intelligence somewhere. I have tried a lot of places. If you have better suggestions than philosophy journals or LW, let me know.
The virtue of silence is one of our 12 virtues here. That you don’t know speaks to ignorance on your part. And perhaps on taking your own advice you might not have made this post at all. And maybe you would have learnt something instead.
Do you even know the name of Popper’s philosophy? Did you read the discussions about this that already happened on LW?
It seems that you’re completely out of your depth, can’t answer me, and don’t want to make the effort to learn. You can’t answer Popper, don’t know of anyone or any writing that can, and are content with that. Your fellows here are the same way. So Popper goes unanswered and you guys stay wrong.
FYI Popper has lots of self-contained writing. Many of his book chapters are adapted from lectures, as you would know if you’d looked. I have written recommendations of which specific parts of Popper are best to read with brief comments on what they are about:
http://fallibleideas.com/books#popper
Everything you say in your post, about Popper issues, demonstrates huge ignorance, but there are no Paths Forward for you to get better ideas about this. The methodology dispute needs to be settled first, but people (including you) don’t want to do that.
I generally agree with your judgment (assuming that the “effort to learn” refers strictly to Popper).
But before I leave this debate, I would like to point out that you (and Ilya) were able to make this (correct) judgment only because I put my cards on the table. I wrote, relatively shortly and without obfuscation, what I believe. Which allowed you to read it and conclude (correctly) “he is just an uneducated idiot”. This allowed a quick resolution; and as a side effect I learned something.
This may or may not be ironically related to the idea of falsification, but at this moment I feel unworthy to comment on that.
Now I see two possible futures, and it is more or less your choice which one will happen:
Option 1:
You may try to describe (a) your beliefs about induction, (b) what you believe are LW beliefs about induction, and (c) why exactly the supposed LW beliefs are wrong, preferably with a specific example of a situation where following the LW beliefs would result in an obvious error.
This is the “high risk / high reward” scenario. It will cost you more time and work, and there is a chance that someone will say “oh, I didn’t realize this before, but now I see this guy has a point; I should probably read more of what he says”, but there is also a chance that someone will say “oh, he got Popper or LW completely wrong; I knew it was not worth debating him”. Which is not necessarily a bad thing, but will probably feel so.
Yeah, there is also the chance that people will read your text and ignore it, but speaking for myself, there are two typical reasons why I would do that: either the text is written in a way that makes it difficult for me to decipher what exactly the author was actually trying to say; or the text depends on links to outside sources but my daily time budget for browsing the internet is already spent. (That is why I selfishly urge you to write a self-contained article using your own words.) But other people may have other preferences. Maybe the best would be to add footnotes with references to sources, but make them optional for understanding the gist of the article.
Option 2:
You will keep saying: “guys, you are so confused about induction; you should definitely read Popper”, and people at LW will keep thinking: “this guy is so confused about induction or about our beliefs about induction; he should definitely read the Sequences”, and both sides will be frustrated about how the other side is unwilling to spend the energy necessary to resolve the situation. This is the “play it safe, win nothing” scenario. Also the more likely one.
Last note: Any valid argument made by Popper should be possible to explain without using the word “Popper” in the text. Just like the Pythagorean theorem is not about the person called Pythagoras, but about squares on triangles, and would be equally valid if it had instead been discovered or popularized by a completely different person; you could simply call it the “squares-on-triangles theorem” and it would work equally well. (Related in Sequences: “Guessing the teacher’s password”; “Argument Screens Off Authority”.) If something is true about induction, it is true regardless of whether Popper did or didn’t believe it.
when i asked for references to canonical LW beliefs, i was told that would make it a cult, and LW does not have beliefs about anything. since no pro-LW ppl could/would state or link to LW’s beliefs about induction – and were hostile to the idea – i think it’s unreasonable to ask me to. individual ppl at LW vary in beliefs, so how am i supposed to write a one-size-fits-all criticism? LW ppl offer neither a one-size-fits-all pro-induction explanation nor do any of them offer it individually. e.g. you have not said how you think induction works. it’s your job, not mine, to come up with some version of induction which you think actually works – and to do that while being aware of known issues that make that a difficult project.
again, there are methodology issues. unless LW gives targets for criticism – written beliefs anyone will take responsibility for the correctness of (you can do this individually, but you don’t want to – you’re busy, you don’t care, whatever) – then we’re kinda stuck (given also the unwillingness to address CR).
your refusal to use outside sources is asking me to rewrite material. why? some attempt to save time on your part. is that the right way to save time? no. could we talk about the right ways to save time? if you wanted to. but my comments about the right way to save time are in outside sources, primarily written by me, which you therefore won’t read (e.g. the Paths Forward stuff, and i could do the Popper stuff linking only to my own stuff, which i have tons of, but that’s still an outside source. i could copy/paste my own stuff here, but that’s stupid. it’s also awkward b/c i’ve intentionally not rewritten essays already written by my colleagues, b/c why do that? so i don’t have all the right material written by myself personally, on purpose, b/c i avoid duplication.). so we’re kinda stuck there. i don’t want to repeat myself for literally more than the 50th time, for you personally (who hasn’t offered me anything – not even much sign you’ll pay attention, care, keep replying next week, anything), b/c you won’t read 1) Popper 2) Deutsch 3) my own links to myself 4) my recent discussions with other LW ppl where i already rewrote a bunch of anti-induction arguments and wasn’t answered.
as one example of many links to myself that you categorically don’t want to address:
http://curi.us/1917-rejecting-gradations-of-certainty (including the comments)
In the linked article, you seem to treat “refutation by criticism” as something absolute. Either something is refuted by criticism, or it isn’t refuted by criticism; and in either case you have 100% certainty about which one of these two options it is.
There seems to be no space for situations like “I’ve read a quite convincing refutation of something, but I still think there is a small probability there was a mistake in this clever verbal construction”. It either “was refuted” or it “wasn’t refuted”; and as long as you are willing to admit some probability, I guess it by default goes to the “wasn’t refuted” basket.
In other words, if you imagine a variable containing value “X was refuted by criticism”, the value of this variable at some moment switches from 0 to 1, without any intermediate values. I mean, if you reject gradations of certainty, then you are left with a black-and-white situation where either you have the certainty, or you don’t; but nothing in between.
If this is more or less correct, then I am curious about what exactly happens in the moment where the variable actually switches from 0 to 1. Imagine that you are doing some experiments, reading some verbal arguments, and thinking about them. At some moment, the variable is at 0 (the hypothesis was not refuted by criticism yet), and at the very next moment the variable is at 1 (the hypothesis was refuted by criticism). What exactly happened during that last fraction of a second? Some mental action, I guess, like connecting two pieces of a puzzle together, or something like this. But isn’t there some probability that you actually connected those two pieces incorrectly, and maybe you will notice this only a few seconds (or hours, days, years) later? In other words, isn’t the “refutation by criticism” conditional on the probability that you actually understood everything correctly?
If, as I incorrectly said in previous comments, one experiment doesn’t constitute refutation of a hypothesis (because the experiment may be measured or interpreted incorrectly), then what exactly does? Two experiments? Seven experiments? Thirteen experiments and twenty four pages of peer-reviewed scientific articles? Because if you refute “gradations of certainty”, then it must be that at some moment the certainty is not there, and at another moment there is… and I am curious about where and why is that moment.
Throwing books at someone is generally known as “courtier’s reply”. The more text you throw at me, the smaller probability that I would read them. (Similarly, I could tell you to read Korzybski’s Science and Sanity, and only come back after you mastered it, because I believe—and I truly do—that it is related to some mistakes you are making. Would you?)
There are some situations when things cannot be explained by a short text. For example, if a 10-year-old kid asked me to explain quantum physics in less than 1 page of text, I would give up. -- So let me ask you: is Popper’s argument against induction the kind of knowledge that cannot be explained to an intelligent adult using less than 1 page of text, not even in a simplified form?
Sometimes the original form of the argument is not the best one. For example, Gödel spent hundreds of pages proving something that kids today could express as “any mathematical theorem can be stored on computer as a text file, which is kinda a big integer in base 256”. (Took him hundreds of pages, because people didn’t have computers back then.) So maybe the book where Popper explained his idea is similarly not the most efficient way to explain the idea. Also, if an idea cannot be explained without pointing to the original source, that is a bit suspicious. On the other hand, of course, not everyone is skilled at explaining, so sometimes the text written by a skilled author has this advantage.
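For instance, here is a toy illustration of the “big integer in base 256” remark (the statement text is made up, just to show the encoding):

```python
# Any text, e.g. a theorem, is also one big integer in base 256.
statement = "In any right triangle, a^2 + b^2 = c^2."
n = int.from_bytes(statement.encode("ascii"), byteorder="big")
print(n)  # one (very large) integer encoding the whole statement

# The encoding is lossless: the integer converts back to the original text.
assert n.to_bytes((n.bit_length() + 7) // 8, "big").decode("ascii") == statement
```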
Summary:
I believe that your belief in “refutation by criticism” as something that either is or isn’t, but doesn’t have “gradations of certainty”, is so fundamentally wrong that it doesn’t make sense to debate further. Because this is the whole point of why probabilistic reasoning, Bayes’ theorem, etc. are so popular on LW. (Because probabilities are what you use when you don’t have absolute certainty, and I find it quite ironic that I am explaining this to someone who has read orders of magnitude more of Popper than I did.)
The issue here also is Brandolini’s law:
“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”
The problem with the “courtier’s reply” is you could always appeal to it, even if Scott Aaronson is trying to explain something about quantum mechanics to you, and you need some background (found in references 1, 2, and 3) to understand what he is saying.
There is a type 1 / type 2 error tradeoff here. Ignoring legit expert advice is bad, but being cowed by an idiot throwing references at you is also bad.
As usual with tradeoffs like these, one has to decide on a policy that is willing to tolerate some of one type of error to keep the error you care about to some desired level.
I think a good heuristic for deciding who is an expert and who is an idiot with references is credentialism. But credentialism has a bad brand here, due to LW’s “love affair with amateurism”. One consequence of this love affair is that a lot of folks here make the above tradeoff badly (in particular, they ignore legitimate advice to read far too frequently).
Here’s a tricky example of judging authority (credentials). You say listen to SA about QM. Presumably also listen to David Deutsch (DD), who knows more about QM than SA does. But what about me? I have talked with DD about QM and other issues at great length, and I have a very accurate understanding of which things I can say about QM (and other matters) that are what DD would say, and when I don’t know something or disagree with DD. (I have done things like debate physics, with physicists, many times, while being advised by DD and having him check all my statements, so I find out when I have his views right or not.) So my claims about QM are about as good as DD’s, when I make them – and are therefore even better than SA’s, even though I’m not a physicist. Sorta, not exactly. Credentials are complicated and such a bad way to judge ideas.
What I find most people do is decide what they want to believe or listen to first, and then find an expert who says it second. So if someone doesn’t want to listen, credentials won’t help, they’ll just find some credentials that go the other way. DD has had the same experience repeatedly – people aren’t persuaded due to his credentials. That’s one of the main reasons I’m here instead of DD – his credentials wouldn’t actually help with getting people here to listen/understand. And, as I’ve been demonstrating and DD and I already knew, arguments aren’t very effective here either (just like elsewhere).
And I, btw, didn’t take things on authority from DD – I asked questions and brought up doubts and counter-arguments. His credentials didn’t matter to me, but his arguments did. Which is why he liked talking with me!
ROFL
And here I was, completely at a loss as to why David Deutsch doesn’t hang out at LW… But now we know.
you’re mean and disruptive. at least you’re demonstrating why credentials are a terrible way to address things, which is my point. you just assume the status of various credentials without being willing to think about them, let alone debate them (using more credentials (regress), or perhaps arguments? but if arguments, why not just use those in the first place?). so for you, like most people, using credentials = using bias.
Woo, kindergarten flashbacks!
Am I? Please demonstrate.
What do you mean by bias? In statistics bias is one of those things you trade off against other things (like variance). Being unbiased is not always optimal.
Yeah, credentials are a poor way of judging things. But that first paragraph doesn’t show remotely what you think it does.
Some of David Deutsch’s credentials that establish him as a credible authority on quantum mechanics: He is a physics professor at a leading university, a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.
Your credentials as a credible authority on quantum mechanics: You assure us that you’ve talked a lot with David Deutsch and learned a lot from him about quantum mechanics.
This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don’t.
It’s not clear to me what argument you’re actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that’s flatly untrue.
They are not, though. It’s standard “what LW calls ‘Bayes’ and what I call ‘reasoning under uncertainty’”: you condition on things associated with the outcome, since those things carry information. Let O be the outcome (having a clue) and C the conditioning variable (having a credential). p(O | C) > p(O), so your credence in O should be computed after conditioning on C, on pain of irrationality; specifically, the type of irrationality where you leave information on the table.
You might say “oh, I heard about how argument screens authority.” This is actually not true though, even by “LW Bayesian” lights, because you can never be certain you got the argument right (or the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn’t true.
It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
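To make this concrete, here is a toy sketch in Python (all the numbers are invented for illustration):

```python
# Toy illustration of conditioning on credentials (all numbers invented).
# O = "the speaker has a clue", C = "the speaker has a credential".

p_O = 0.10              # prior probability of having a clue
p_C_given_O = 0.60      # how often the clueful are credentialed
p_C_given_not_O = 0.05  # how often the clueless are credentialed

# Bayes' theorem: p(O | C) = p(C | O) * p(O) / p(C)
p_C = p_C_given_O * p_O + p_C_given_not_O * (1 - p_O)
p_O_given_C = p_C_given_O * p_O / p_C

print(f"p(O)     = {p_O:.3f}")          # 0.100
print(f"p(O | C) = {p_O_given_C:.3f}")  # ~0.571: C carries information
```

As long as p(C | O) differs from p(C | not-O), refusing to condition on C leaves information on the table.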
The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of “experts” writ large, except for an explicitly enumerated subset (perhaps ones EY or other “recognized community thought leaders” liked).
This is a part of community DNA, starting with EY’s stuff, and Luke’s “philosophy is a diseased discipline.”
That is crazy.
Actually, I somewhat agree, but being an agreeable sort of chap I’m willing to concede things arguendo when there’s no compelling reason to do otherwise :-), which is why I said “Yeah, credentials are a poor way of judging things” rather than hedging more.
More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don’t consider him an authority on whether any sort of gods exist, because I think the process that gave him his credentials isn’t sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is, and I would consider the Pope a very good authority on that topic even if he didn’t have the ability to control that doctrine as well as report it.)
It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn’t do this so well; that theology does it worse; that homeopathy does it worse still. (This isn’t just about the moral or cognitive excellence of the disciplines in question; it’s also that it’s harder to tell whether someone’s any good or not in some fields than in others.)
I guess the way I would slice disciplines is like this:
(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)
(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.
Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.
Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.
Re: (b), why are we talking about (b) at all?
i think this is false, and is an indication of using the wrong methods to refute bullshit – the right methods reuse refutations of categories of bad ideas. do you have some comprehensive argument that it must be true?
i find it disturbing how much people here are in favor of judging ideas by sources instead of content – credentialism. that’s pretty pure irrationality. debating which credentials are worth how much is also a bad way to approach discussions: it’s totally non-obvious and controversial which credentials are how good, even for standard credentials like PhDs from different universities.
Is English your first language?
The context matters. If you are trying to figure out how X actually works you probably should go read or at least scan the relevant books even if no one is throwing references at you. On the other hand, if you’re just procrastinating by engaging in a Yet Another Internet Argument with zero consequences for your life, going off to read the references is just a bigger waste of time.
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion. That leaves me thinking you’re a bad person to discuss with. Am I mistaken?
Making mistakes isn’t random or probabilistic. When you make a judgement, there is no way to know some probability that your judgement is correct. Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability? And won’t that judgement also have a probability, causing an infinite regress of probability assignments?
Mistakes are unpredictable. At least some of them are. So you can’t predict (even probabilistically) whether you made one of the unpredictable types of mistakes.
What you can do, fallibly and tentatively, is make judgements about whether a critical argument is correct or not. And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
That’d work fine if they knew everything or nothing about induction. However, it’s highly problematic when they already have thousands of pages worth of misconceptions about induction (some of which vary from the next guy’s misconceptions). The misconceptions include vague parts they don’t realize are vague, non sequiturs they don’t realize are non sequiturs, confusion about what induction is, and other mistakes plus cover-ups (rationalizations, dishonesty, irrationality).
Induction would be way easier to explain to a 10 year old in a page than to anyone at LW, due to lack of bias and prior misconceptions. I could also do quantum physics in a page for a ten year old. QM is easy to explain at a variety of levels of detail, if you don’t have to include anything to preemptively address pre-existing misconceptions, objections, etc. E.g., in a sentence: “Science has discovered there are many things your eyes can’t see, including trillions of other universes with copies of you, me, the Earth, the sun, everything.”
It’s like you believe “A” and “A implies B” and “B implies C”, while I believe “non-A” and “non-A implies Q”. The point we should debate is whether “A” or “non-A” is correct; because as long as we disagree on this, of course each of us is going to believe a different chain of things (one starting with “A”, the other starting with “non-A”).
I mean, if I hypothetically believed that absolute certainty is possible and relatively simple to achieve, of course I would consider probabilistic reasoning an interesting but inferior form of reasoning. We wouldn’t be having this debate. And if you accepted that certainty is impossible (even certainty of refutation), then probability would probably seem like the next best thing.
Okay, imagine this: I make a judgment that feels completely correct to me, and I am not aware of any possible mistakes. But of course I am a fallible human; maybe I actually made a mistake somewhere, maybe even an embarrassing one.
Scenario A: I made this judgement at 10 AM, after having a good night of sleep.
Scenario B: I made this judgement at 2 AM, tired and sleep deprived.
Does it make sense to say that the probability of a mistake in judgment B is higher than the probability of a mistake in judgment A? In both cases I believe at the moment that the judgment is correct. But in the latter case my ability to notice a possible mistake is smaller.
So while I couldn’t make an exact calculation like “the probability of the mistake is exactly 4.25%”, I can still be aware that there is some probability of a mistake, and sometimes even estimate that the probability in one situation is greater than in another. Which suggests that there is a number; I just don’t know it. (But if we could somehow repeat the whole situation a million times, and observe that I was wrong in 42,500 cases, that would suggest that the probability of the mistake is about 4.25%. Unlikely in real life, but possible as a hypothesis.)
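A minimal sketch of that hypothetical repetition, in Python (the error rate is assumed, since the whole point is that I can’t introspect it):

```python
import random

# Simulate the "repeat the situation a million times" hypothesis:
# the judge has a fixed, unknown-to-them chance of being wrong each time.
random.seed(0)
TRIALS = 1_000_000
TRUE_ERROR_RATE = 0.0425  # assumed for the illustration

mistakes = sum(random.random() < TRUE_ERROR_RATE for _ in range(TRIALS))
print(f"observed error frequency: {mistakes / TRIALS:.4f}")  # close to 0.0425
```

The observed frequency converges on the underlying number, which is what I mean by “there is a number, I just don’t know it”.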
It definitely will. Notice that those are two different things: (a) the probability that I am wrong, and (b) my estimate of the probability that I am wrong.
Yes, what you point out is a very real and very difficult problem. Estimating probabilities in a situation where everything (including our knowledge of ourselves, and even our knowledge of math itself) is… complicated. Difficult to do, and even more difficult to justify in a debate.
This may even be a hard limit on human certainty. For example, if at every moment of time there is a 0.000000000001 probability that you will go insane, that would mean you can never be sure about anything with probability greater than 0.999999999999, because there is always the chance that, however logical and reasonable something sounds to you at the moment, it’s merely because you became insane at that very moment. (The cause of insanity could be e.g. a random tumor or a blood vessel breaking in your brain.) Even if you built a system more reliable than a human, for example a system maintained by a hundred humans, where if anyone goes insane the remaining ones notice it and fix the mistake, the system itself could achieve higher certainty, but you, as an individual reading its output, could not. Because there would always be the chance that you just went insane, and what you believe you are reading isn’t actually there.
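A rough sketch of that limit (the per-moment probability is invented, as above):

```python
# Per-moment probability of going insane (invented for illustration).
p_insane = 1e-12

# At any single moment your confidence is capped at:
print(f"single-moment cap: {1 - p_insane:.12f}")

# Over n moments of reasoning, the probability you stayed sane throughout
# is (1 - p)^n, so the cap erodes with the length of the reasoning:
for n in (10**6, 10**9, 10**12):
    print(f"cap after {n:>13} moments: {(1 - p_insane) ** n:.9f}")
```

So even an astronomically small per-moment risk puts a hard ceiling on the certainty available to an individual over time.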
Relevant LW article: “Confidence levels inside and outside an argument”.
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detects 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify a confidence interval, it is ultimately a probabilistic answer. (And “p<0.05” is also just an arbitrary number; why not “p<0.001”?)
You can have a “binary” solution only as long as you remain in the realm of words. (“Socrates is a human. All humans are mortal. Therefore Socrates is mortal. Certainty of argument: 100%.”) Even there, the longer the chain of words you produce, the greater the chance that you made a mistake somewhere. I mean, if you imagine a syllogism going on for a thousand pages, ultimately proving something, you would probably want to check the whole book at least two or three times; which means you wouldn’t feel 100% certainty after the first reading. But the greater problems appear on the boundary between words and reality. (Theory: “the energy of the particle X is 0.04 units”; the experimental device displays 0.041. Also, experimental devices sometimes break, and your assistant sometimes records the numbers incorrectly.)
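To illustrate the measurement side with a sketch (the measurement uncertainty is assumed; the example above doesn’t specify one):

```python
from math import erf, sqrt

predicted = 0.040
measured = 0.041
sigma = 0.002  # assumed measurement uncertainty

z = abs(measured - predicted) / sigma
# Two-sided p-value: chance of a deviation at least this large
# if the theory is exactly right.
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # z = 0.50, p ~ 0.617
# Whether 0.041 "falsifies" 0.040 now depends on an arbitrary threshold
# (p < 0.05? p < 0.001?), which is exactly the problem.
```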
Fair point.
(BTW, I’m going offline for a week now; for reasons unrelated to LW or this debate.)
EDIT:
For the record: Of course there are things where I consider the probability to be so high or so low that I treat them for all practical purposes as 100% or 0%. If you ask me e.g. whether gravity exists, I will simply say “yes”; I am not going to role-play Spock and give you a number with 15 decimal places. I wouldn’t even know exactly how many nines there are after the decimal point. (But again, there is a difference between “believing there is a probability” and “being able to tell the exact number”.)
The most obvious impact of probabilistic reasoning on my behavior is that I generally don’t trust long chains of words. Give me 1000 pages of syllogisms that allegedly prove something, and my reaction will be “the probability that somewhere in that chain is an error is so high that the conclusion is completely unreliable”. (For example, I am not even trying to understand Hegel. Yeah, there are also other reasons to distrust him specifically, but I would not trust such a long chain of logic without experimental confirmation of intermediate results from any author.)
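That distrust can be made numeric with a small sketch (the per-step error rate is invented):

```python
# If each syllogism in a chain is correct with probability 0.99 (assumed),
# the probability the whole chain is error-free decays exponentially.
p_step_ok = 0.99
for steps in (10, 100, 1000):
    print(f"{steps:4d} steps: P(no error anywhere) = {p_step_ok ** steps:.5f}")
# 1000 steps -> ~0.00004: almost certainly an error somewhere in the chain.
```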
It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability b/c you’re already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming – but that doesn’t work in all cases. So you can’t figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes but others are just unknown (and you can’t combine “unknown” and “90%” to get an overall probability).
I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There’s always framework questions too – e.g. you may say according to your framework, given your context, then you’re unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn’t know the probability your framework has a mistake, so you can’t get an overall probability this way.
if you’re already aware that your system doesn’t really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they’re wrong in some way wasn’t enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn’t care to refute.)
the hard limit is we don’t have certainty, we’re fallible. that’s it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
you have to make a decision about what standards of evidence you will use for what purpose, and why that’s the right thing to do, and expose that meta decision to criticism.
the epistemology issues we’re talking about are prior to the physics issues, and don’t involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren’t probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)
see http://curi.us/2067-empiricism-and-instrumentalism and https://yesornophilosophy.com
no, a problem can and should specify criteria of what the bar is for a solution to it. lots of the problems ppl have are due to badly formulated (ambiguous) problems.
i do not value certainty as a feeling. i’m after objective knowledge, not feelings.
That isn’t what Viliam said, and I suggest that here you’re playing rhetorical games rather than arguing in good faith. It’s as if someone took your fallibilism and your rejection of probability, and said “Since you admit that you could well be wrong and you have no idea how likely it is that you’re wrong, why should we take any notice of what you say?”.
You mean “the philosophy which claims to have a solution to this problem”. (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)
Anyway, I think the answer depends on what you mean by “study”. If you mean “investigate at all” then the answer is that several people here have considered some version of Popperian “critical rationalism”, so your question has a false premise. If you mean “study in depth” then the answer is that by and large those who’ve considered “critical rationalism” have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.
My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else’s, is that I’ve read a couple of Deutsch’s books and some of his other writings and given Deutsch’s version of “critical rationalism” hours, but not weeks, of thought, and that since you turned up here I’ve given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. “solves” the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren’t explicitly about random events, even less credible; that the “return on investment” of further in-depth investigation of Popper’s or Deutsch’s ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they’re all bad ideas but because I think I already grasp them well enough for my purposes.
A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to “resolve epistemology” in a way that ignores such errors. (Perhaps that isn’t what you meant by “we can talk about measurement error after resolving epistemology”, in which case some clarification would be a good idea.)
You say that as if you expect it to be a new idea around here, but it isn’t. See e.g. this old LW article. For the avoidance of doubt, I’m not claiming that what that says about knowledge and certainty is the same as you would say—it isn’t—nor that what it says is original to its author—it isn’t. Just that distinguishing knowledge from certainty is something we’re already comfortable with.
You would equally not be entitled to a 100% certainty, or have any other sort of 100% certainty you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is “obviously” a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.
(And when you complain above of an infinite regress, it’s precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren’t going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn’t seem to me that it does.)
you haven’t cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.
by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.
it’s dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as “some c.r. proponents”.
no. i have tried and found it’s impossible, and found out why (arguments u don’t wish to learn).
anyway i don’t see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren’t really looking to have a conversation to resolve the matter. why speak at all?
Your understanding of “resolve the matter” is very peculiar—as far as I can see it means “go read what I tell you to read so that you will agree with me”.
I notice that you show considerable inflexibility: you follow a certain pattern of interaction which, to no great surprise, tends to end up in the same place: you get nowhere and accuse people of bad faith and unwillingness to learn.
You’ve been hanging around the place for a few weeks by now—how about you, did you learn anything? Or this is strictly a bring-civilization-to-the-savages expedition from your point of view?
Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.
Why?
Don’t wish to learn them? True enough. I don’t see your relationship to me as being that of teacher to learner. I’d be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.
It is supposed to point out some errors in things you wrote, and to answer some questions you raised.
Does that actually mean anything? If so, what?
I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose (“paths forward”), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about “paths forward” that you keep trying to replace it with.
See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It’s not my problem that you’re unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.
[EDITED because I noticed I wrote “conservation” where I meant “conversation” :-)]
you have openly stated your unwillingness to
1) do PF
2) discuss PF or other methodology
that’s an impasse, created by you. you won’t use the methodology i think is needed for making progress, and won’t discuss the disagreement. a particular example issue is your hostility to the use of references.
the end.
given your rules, including the impasse above.
Yup. I’m not interested in jumping through the idiosyncratic set of hoops you choose to set up.
Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don’t try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So … nah, I’m not buying “created by you”. I’m not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.
You just made that up. I am not hostile to the use of references.
(Maybe I objected to something you did that involved the use of references; I don’t remember. But if I did, it wasn’t because I am hostile to the use of references.)
the moderators here actually just threatened my friend with a ban for posting a link to one of my articles about our philosophical disagreements, and deleted the thread. it was this one about empiricism and instrumentalism (not quite induction, but closely related): http://curi.us/2067-empiricism-and-instrumentalism
the reason you have trouble applying reason is b/c u understand reason badly. it’s easy if u understand it well enuf. the idea/action gap is a matter of flaws in the ideas – both having the wrong ideas and also having incomplete ideas. ideas are what you need. nothing but ideas can help/save you.
insight porn sucks because its ideas aren’t good enough, and are designed to impress people with standard memes, not to be useful. it’s a trap which you shouldn’t mix up with real philosophy.
also you ask about posting an anti-induction article. i wrote a number of anti-induction arguments both on the forums and in slack, which have not been answered. i also gave references to more, which have not been answered. why should everything be repeated for each individual who comes along and doesn’t want to read references?
repeating arguments for people unwilling to look at the literature is not productive. it takes so much effort to understand philosophy that the effort of doing some reading is table stakes. people who don’t want to do that are unserious. and you only have to read until the first mistake, and then comment. and if you’re wrong about that first mistake, you can look for the second one and also take the matter more seriously. and by the 5th mistake you’re wrong about, i expect your full attention.
the methodology disagreements need to come before the induction disagreement or we won’t be discussing induction using the same rules of discussion.
and you ask me to define induction so we’re on the same page. that’s part of the problem. ppl at LW are not on the same page, and want to all be addressed individually – which is too much work, and anyway none of them take responsibility for finding the truth, they all just quit after a small amount of discussion, as i expect you to as well. if you want to learn, join FI ( http://fallibleideas.com/discussion-info ) and ask and ppl will help you. or read. http://fallibleideas.com/books or look through the discussions i already had here (both recently and years ago) and answer the points that others did not. you can find logs of the slack chats at https://groups.yahoo.com/neo/groups/fallible-ideas/info
more broadly, inductivists vary so much – and most barely know anything about induction. so there’s no really short one-size-fits-all way to address the issue. it’s a big topic. hence lots of important arguments – and, perhaps more importantly, extensive explanation of the alternative.
if you really want an anti-induction article, one of the best things you could do, first, is give me a pro-induction article you endorse, and stand behind, and take responsibility for. shouldn’t that come first? but when i asked for canonical LW material that would be appropriate to respond to, and that anyone would care if it was mistaken … i was flamed. lay out your positive claims in a serious way – stick your neck out as CR has – before asking for refutation of your unspecified positive claims.
and no i don’t give ideas probabilities. https://yesornophilosophy.com
Justin used to be merely “a member of the FI forum. he followed one of my links”. But now it turns out you’re a team?
Mirrors. They are a thing, you should look into one.
Clearly, your valuable time is wasted here. You probably should go find emptier vessels to fill with your wisdom.
I described to you my approach to induction. Were there any fatal flaws you noticed but didn’t mention?
Yes, we understand your approach has problems :-P
Stop making hostile assumptions, I wasn’t even talking about Justin.
Please keep posting here. Your powers of persuasion are amazing.
i wrote some commentary on Paths Forward, thought people might appreciate it:
http://justinmallone.com/2017/01/paths-forward-comments-part-1/
Don’t bother with these deadbeats. It’s been over four years since I mentioned Kirly Prokastian, and there hasn’t been a flicker of interest.
Who is Kirly Prokastian? Is there a typo? I only got one google hit which was an LW thread.
Is there a different site you’d recommend I bother with instead? I’ve been looking lots of places!
BTW feel free to come post at my forum http://fallibleideas.com/discussion-info
Boy am I embarrassed. Not only did I butcher the spelling, it’s not even a person, it’s a publishing company. Information here.
Thank you for your invitation. I’ll get in touch if I have anything worth saying.