My day-to-day life is populated with many people who do not understand the lessons in this section. Interacting with these people is essential to achieving my own goals; I face a situation in which the rational choice is to communicate irrationally. Specifically, my colleagues and other associates seem to prefer “applause lights” and statements that offer no information. Attaining my personal, rationally selected goals might therefore mean professing irrational beliefs. I don’t think this is a genuine paradox, but it is an interesting point. There is a middle ground between “other-optimizing” (pointing out these applause lights for what they are) and actually adopting the beliefs the “applause lights” communicate, but I do not believe it is tenable, and it may represent a conflict of goals (personal success in my field vs. spreading rational thought). Perhaps it is a microcosm of the precarious balance between self-optimization and world-optimization.
ZacHirschman
Prior to lurking here and reading the excellent posts on rationality, I had never considered eating a tomato. I decided that I didn’t like them at a young age and never revisited the belief. In the past, I figured it was my business and it wasn’t hurting anyone if I decided to avoid tomatoes. Now, I understand that it was an arbitrary preference, that the taste is inoffensive (I may grow to like them), and that they are rich in lycopene (which may be good for you, and almost certainly isn’t bad). In short, I changed a belief I had never before thought necessary to revisit. So far, so good.
The impermanence of things is an excellent reason to get really enthusiastic about them.
The idea that I find least entangled but still very potentially beneficial is that politics is the mind-killer. I realize it’s an old sequence, and it doesn’t have much traction here (since LW is ostensibly un-killed minds).
It seems to me that pop philosophy is being compared to rigorous academic science. Philosophers make great efforts to understand each other’s frameworks. Controversy and disagreement abound, but exercising the mind in predicting consequences using mental models is fundamental to both scientific progress AND everyday life. You and I may disagree on our metaphysical views, but that doesn’t prevent us from exploring the consequences each viewpoint predicts. Eventually, we may be able to test these beliefs. Predicting these consequences in advance helps us use resources effectively (as opposed to testing EVERY possibility scientifically). (Human) philosophy is an important precursor to science.
I’m also glad to see in other comments that the AI case has greater uncertainty than the sleeper cell case.
Having made one counterpoint and mentioned another, let me add that this was a good read and a nice post.
I don’t mean to advocate an epiphany-driven model of discovery.
To use your Scientology example and terminology, what I am advocating is not that we find the “next big thing,” but that we pursue refinement of the original, “genuinely useful material.” Of course, it is much easier to advocate this than to put the work in, but that’s why I’m using the open thread.
There are some legitimate issues with some of the Sequences (both resolved and unresolved). The comments represent a very nice start, but there may be some serious philosophical work left to be done. There is a well of knowledge about pursuing wells of knowledge, and I would find it worthwhile to refine the effective pursuit of knowledge.
What are your heuristics for telling whether posts/comments contain “high-quality opinions,” or “LW mainstream”? Also, what did you think of Loosemore’s recent post on fallacies in AI predictions?
“What if they kicked the mirror-maker out of town and awarded the actual worker?”
This is the question I keep asking myself. In the story as written, the village rewards the clever skilled worker over the diligent skilled worker. This might work in the short term, and the clever worker’s gamble pays off for him personally as he sees increased business from increased prestige. If we consider the village (or the judges) to be actors in the game, however, they act against their own interest by disincentivizing craftsmanship in favor of craftiness. And here I am, arguing for or against a parable...
It seems to me that this discomfort is not a necessary product of the behavior. It may even be a cognitive bias, on the order of thinking that unconditional love is more powerful than conditional love. I submit that a rationalist should expect his or her prospective partners to “calculate their love” and not be afraid of the results.
Your statement has a nice “should” in it. The reason for people not to shun you is that their discomfort is based on a (debatably) flawed heuristic.
In many cases, discomfort is a natural part of changing one’s mind. I can see, though, why romance would be an exception. Discomfort due to unrequited affections, for example, is not evidence of an impending paradigm shift. Discomfort due to a rational calculus, however, might indicate a high likelihood of irrationality.
The cognitive theory is beyond me, but the math looks interesting. I need to exert more thought on this, but I would submit an open Question for the community: might there be a way to calculate error bounds on outputs conditioned on “world models” based on the models’ predictive accuracy and/or complexity? If this were possible, it would be strong support for mathematical insight into the “meta model”.
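To make the open Question concrete, here is a minimal sketch of one way it could be formalized, via Bayesian model averaging: each “world model” is weighted by its predictive accuracy (held-out log-likelihood) minus a complexity penalty (a BIC-style term), and the spread of the weighted predictions serves as a crude error bound. All function names and the scoring rule are illustrative assumptions, not a worked-out theory.

```python
# Sketch: error bounds on predictions conditioned on "world models",
# using predictive accuracy and complexity. Assumptions: a BIC-style
# score (log-likelihood minus a parameter-count penalty) and softmax
# weighting; names here are illustrative, not standard.
import math

def model_weights(log_likelihoods, n_params, n_data):
    """Weight each model by fit minus a complexity penalty."""
    scores = [ll - 0.5 * k * math.log(n_data)
              for ll, k in zip(log_likelihoods, n_params)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    z = sum(exps)
    return [e / z for e in exps]

def averaged_prediction(predictions, weights):
    """Weighted mean of the models' predictions, plus their disagreement."""
    mean = sum(w * p for w, p in zip(weights, predictions))
    var = sum(w * (p - mean) ** 2 for w, p in zip(weights, predictions))
    return mean, var  # var is a rough "error bound" from model disagreement
```

A better-fitting, simpler model dominates the average, and the variance term grows when the surviving models disagree, which is one rough sense in which the “meta model” can report its own uncertainty.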
Well said again, and well-considered that ideas in minds can only move forwards through time (not a physical law). My initial reaction to this article was, “What about philosophy of science?” However, it seems my PoSc objections extend to other realms of philosophy as well. Thank you for leading me here.
I’m looking at the possible causal relationships between certain actions and the resulting discomfort. As I understand your argument, you believe that certain actions by one person will always produce discomfort in the other. I disagree, and I submit that the discomfort is a product of both the original action and the response to it. In other words, if someone has made you feel uncomfortable, it may be possible for you to reduce that discomfort independently of the precipitating action. Your discomfort may be due to an irrational bias. This would be a reason not to shun someone for making you feel uncomfortable.
There is a difference between analyzing an action and communicating that you are analyzing an action. To speak to your concluding example, “smiling back” and, “[going] in your head and think about whether or not that signal means that she likes you,” are NOT mutually exclusive. With practice, you can do both at once. I would call this leveling up.
I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can’t speak for the authors you cite, but the questions asked by philosophers are different from, “What is the best answer?” They are more along the lines of, “How do we generate our answers anyway?” and “What might follow?” This may lead to an admittedly harmful lack of urgency in updating beliefs.
Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy’s sake; an error in efficient map design theory may take a generation or two to become immediately apparent.
Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. As misguided as your citations may have been (as philosophers), Pearl’s work is equally well-guided in redeeming philosophers. I don’t think you have sufficiently addressed the cherrypicking charge: if your cited articles are strong evidence that philosophers don’t consider each other’s viewpoints, then every article in which philosophers do sufficiently consider each other’s viewpoints is weak evidence of the opposite.
It feels to me as though you are cherrypicking both evidence and topic. It may very well be that philosophers have a lot of work to do in the important field of AI. This does not invalidate the process. Get rid of the term; talk about the process of refining human intelligence through means other than direct observation. The PROCESS, not the results (which are what the article you cite focuses on).
Speaking of that article from Noûs, it was published in 2010. Pearl did lots of work on counterfactuals and uncertainty dating back to 1980, but I would argue that “The algorithmization of counterfactuals” contains the direct solution you reference. That paper was published in 2011. Unless, of course, you are referring to “Causes and Explanations: A Structural-Model Approach,” which was published in 2005 in the British Journal for the PHILOSOPHY of Science.
Popper (or Popperism) predicted that falsifiable models would yield more information than non-falsifiable ones.
I don’t think this is precisely testable, but it references precisely testable models. That is why I would categorize it as philosophy (of science), but not science.
Yes, I may have made an inferential leap here that was wrong or unnecessary. You and I agree very strongly that there is a distinction between Philosophy of Science and Experimental Philosophy. I wanted to draw a distinction between the kind of “street philosophy” done by Socrates and the more rigorous, mathematical Philosophy of Science. “Experiment” may not have been the most appropriate word.
I would be glad to reconsider my stance that this rationalist community privileges emotivist readings of ethics. I will begin looking into this. My reason for including this argument is the idea (from the article) that when philosophers ask questions about right and wrong or good and bad, they are really asking how people feel about these concepts.
I like your interpretation of philosophy as it pertains to ethics, aesthetics, and perhaps metaphysics. Your Socrates example, and LW in general, privilege emotivist ethics, but this is an interesting point and not a drawback. Looking at ethics as a cognitive science is not necessarily a flawed approach, but it is important to consider the potential alternative models.
Philosophy has a branch called “philosophy of science” where your dissolution falls apart. Popperian falsifiability, Kuhnian paradigm shifts, and Bayesian reasoning all fall into this domain. There is a great compendium by Curd and Cover; I recommend searching the table of contents for essays also available online. Here, philosophers experiment with the precision of testable models rather than hypotheses.
I see that I used the word “growth” capriciously. I don’t necessarily mean greater numbers, I mean the opposite of stagnation. Of course a call for action is easier and less effective than acting, but that’s why we have open threads.
I think of it as “improvematism.” Maybe “improvementism” would sound more serious.
A few thoughts on Mark_Friedenbach’s recent departure:
I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, “good luck and best wishes.” We are here for reasons, and when those reasons wane, I wouldn’t begrudge anyone looking elsewhere or doing other things.
(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material. I wouldn’t mind reading if someone took a crack at a Sequences 2.0, or something completely different. Perhaps something with a more empirical/scientific focus (as opposed to foundational/philosophical), as Mark recommended.