I think you’re interpreting him correctly. “Senses” means what it normally means, and he probably means something like: you shouldn’t trust them naively, thinking things like “heat is light because fire is bright and the sun is bright”; instead you need a methodology (tool) to interpret your sense experience, aggregate your experiences, and seek out the experiences you’re missing. He might also mean literal tools too. He later describes how to make a primitive thermometer.
I think he would think the science between him and now was a lot better and people are doing the thing he wanted. (Many people a lot of the time though not all of the people all of the time.) He probably would have opinions about p-values and publication bias, etc., but he’d still think things overall have been much better in the last 400 years.
Yeah, definitely some kind of motte and bailey thing going on.
Link Previews are now live.
Subscriptions and New Editor are still being worked on. The new editor will hopefully hit the “opt into experimental features” status soon.
Convert Comments to Posts is hitting some difficulties.
Converting this from a Facebook comment to LW Shortform.
A friend complains about recruiters who send repeated emails saying things like “just bumping this to the top of your inbox” when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they’re simply paid to spam.
Some discussion of repeated messaging behavior ensued. These are my thoughts:
I feel conflicted about repeatedly messaging people. All the following being factors in this conflict:
Repeatedly messaging can make you the asshole who gets through someone’s unfortunate asshole filter.
There’s an angle from which repeatedly, manually messaging people is a costly signal, a bid showing that their response would be valuable to you. Admittedly, this might not filter in the desired ways.
I know that many people are in fact disorganized and lose emails, or otherwise don’t have systems for getting back to you, such that a failure to reply doesn’t mean they didn’t want to.
Other people have extremely good systems. I’m always impressed by the super busy, super well-known people who reliably get back to you after three weeks. Systems. I don’t always know where someone falls between “has no systems, relies on other people to message repeatedly” and “has impeccable systems but, due to email volume, will take two weeks.”
The overall incentives are such that most people probably shouldn’t generally reveal which they are.
Sometimes the only way to get things done is to bug people. And I hate it. I hate nagging, but given other people’s unreliability, it’s either you bugging them or a good chance of not getting some important thing.
A wise, well-respected, business-experienced rationalist told me many years ago that if you want something from someone, you should just email them every day until they do it. It feels like this is the wisdom of the business world. Yet . . .
Sometimes I sign up for a free trial of an enterprise product and, my god, if you give them your email after having expressed the tiniest interest, they will keep emailing you forever with escalatingly attention-grabby and entitled subject lines. (Like recruiters, but much worse.) If I were being smart, I’d have a system which filters those emails, but I don’t, and so they are annoying. I don’t want to pattern match to that kind of behavior.
Sometimes I think I won’t pattern match to that kind of spam because I’m different and my message is different, but then the rest of the LW team cautions me that such differences exist in my mind but not necessarily in the mind of the recipient I’m annoying.
I suspect that, as a whole, they lean too far in the direction of avoiding being assholes, at the risk of not getting things done, while I’m biased in the reverse direction. I suspect this comes from my most recent work experience being in the “business world,” where ruthless, selfish, asshole norms prevail. It may be that I dial it back from that but still end up seeming brazen to people with less immersion in that world; probably, overall, cultural priors and individual differences heavily shape how messaging behavior is interpreted.
So it’s hard. I try to judge on a case by case basis, but I’m usually erring in one direction or another with a fear in one direction or the other.
A heuristic I heard in this space is to message repeatedly but with an exponential delay factor each time you don’t get a response: message again after one week; if you don’t get a reply, message again after another two weeks, then four weeks, etc. Eventually, you won’t be bugging whoever it is.
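That back-off schedule is easy to make concrete. A minimal sketch (the function name and defaults are my own, purely illustrative):

```python
from datetime import date, timedelta

def followup_schedule(start, initial_wait_days=7, n_messages=4):
    """Dates to re-send, doubling the wait after each unanswered message."""
    schedule, wait = [], timedelta(days=initial_wait_days)
    current = start
    for _ in range(n_messages):
        current = current + wait
        schedule.append(current)
        wait *= 2  # exponential back-off: 7, 14, 28, 56 days...
    return schedule

print(followup_schedule(date(2019, 1, 1)))
# waits of 7, 14, 28, then 56 days after the initial email
```

After a handful of doublings the gap between messages is months, so in practice you stop bugging them without ever having to decide to give up.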
Discussion of whether paying people to be able to send them emails, or paying only if they reply, could solve the various bid-for-attention problems involved with email.
Selected Aphorisms from Francis Bacon’s Novum Organum
I’m currently working to format Francis Bacon’s Novum Organum as a LessWrong sequence. It’s a moderate-sized project, as I have to work through the entire work myself and write an introduction that does Novum Organum justice and explains the novel move of taking an existing work and posting it on LessWrong (short answer: NovOrg is some serious hardcore rationality and contains central tenets of the LW foundational philosophy notwithstanding being published back in 1620, not to mention that Bacon and his works are credited with launching the modern Scientific Revolution).
While I’m still working on this, I want to go ahead and share some of my favorite aphorisms from it so far:
3. . . . The only way to command reality is to obey it . . .
9. Nearly all the things that go wrong in the sciences have a single cause and root, namely: while wrongly admiring and praising the powers of the human mind, we don’t look for true helps for it.
Bacon sees the unaided human mind as entirely inadequate for scientific progress. He sees the way forward as constructing tools/infrastructure/methodology to help the human mind think/reason/do science.
10. Nature is much subtler than are our senses and intellect; so that all those elegant meditations, theorizings and defensive moves that men indulge in are crazy—except that no-one pays attention to them. [Bacon often uses a word meaning ‘subtle’ in the sense of ‘fine-grained, delicately complex’; no one current English word will serve.]
24. There’s no way that axioms •established by argumentation could help us in the discovery of new things, because the subtlety of nature is many times greater than the subtlety of argument. But axioms •abstracted from particulars in the proper way often herald the discovery of new particulars and point them out, thereby returning the sciences to their active status.
Bacon repeatedly hammers that reality has a surprising amount of detail such that just reasoning about things is unlikely to get at truth. Given the complexity and subtlety of nature, you have to go look at it. A lot.
28. Indeed, anticipations have much more power to win assent than interpretations do. They are inferred from a few instances, mostly of familiar kinds, so that they immediately brush past the intellect and fill the imagination; whereas interpretations are gathered from very various and widely dispersed facts, so that they can’t suddenly strike the intellect, and must seem weird and hard to swallow—rather like the mysteries of faith.
Anticipations are what Bacon calls making theories by generalizing principles from a few specific examples and then reasoning from those [ill-founded] general principles. This is the method of Aristotle and of science up until that point, which Bacon wants to replace. Interpretations is his name for his inductive method, which generalizes only very slowly, building out increasingly large sets of examples/experiments.
I read Aphorism 28 as saying that Anticipations have much lower inferential distance, since they can be built from simple examples with which everyone is familiar. In contrast, if you build up a theory from lots of disparate observations that aren’t universally shared, you now have lots of inferential distance, and people find your ideas weird and hard to swallow.
All quotations cited from: Francis Bacon, Novum Organum, in the version by Jonathan Bennett presented at www.earlymoderntexts.com
jimrandomh correctly pointed out to me that precision can have its own value for various kinds of comparison. I think he’s right. If A and B are each biased estimators of ‘a’ and ‘b’, but their bias is consistent (causing lower accuracy) while their precision is high, then I can make comparisons between A/a and B/b over time and against each other in ways I couldn’t if the estimators were less biased but noisier.
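A quick simulation illustrates the point (the specific numbers are arbitrary, chosen just to make the contrast vivid): a consistent bias cancels out when you compare two estimates, while unbiased noise can swamp a real difference.

```python
import random

random.seed(0)

def biased_precise(x):
    """Consistent bias of +2, but very little noise (high precision)."""
    return x + 2.0 + random.gauss(0, 0.05)

def unbiased_noisy(x):
    """No bias, but lots of noise (low precision)."""
    return x + random.gauss(0, 3.0)

a, b = 10.0, 10.5  # true values; b really is larger than a

# How often does each estimator rank a and b correctly?
trials = 1000
precise_correct = sum(biased_precise(a) < biased_precise(b) for _ in range(trials))
noisy_correct = sum(unbiased_noisy(a) < unbiased_noisy(b) for _ in range(trials))
print(precise_correct / trials)  # near 1.0: the shared bias cancels in the comparison
print(noisy_correct / trials)    # barely above 0.5: the noise swamps the real gap
```

The biased-but-precise estimator gets the ordering right essentially every time, because the bias is the same on both sides of the comparison; the unbiased-but-noisy one is barely better than a coin flip here.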
Still, the assumption here is that the estimator is tracking a real, fixed thing.
If I were to try to improve my estimator, I’d look at the process as a whole it implements and try to improve that rather than just trying to make the answer come out the same.
Posting some thoughts I wrote up when first engaging with the question for 10-15 minutes.
The question is phrased as: How Can People Evaluate Complex Questions Consistently? I will be reading a moderate amount into this exact phrasing. Specifically, that it’s specifying a project whose primary aim is increasing the consistency of answers to questions.
The project strikes me as misguided. It seems to me definitely the case that consistency is an indicator of accuracy, because if your “process” is reliably picking out a fixed situation in the world, then this process will give roughly the same answers applied over time. Conversely, if I keep getting back disparate answers, then likely whatever answering process is being executed isn’t picking up a consistent feature of reality.
Examples: 1) I have a measuring tape and I measure my height. Each time I measure myself, my answer falls within a centimeter range. Likely I’m measuring a real thing in the world with a process that reliably detects it. We know how my brain and my height get entangled, etc. 2) We ask different philosophers about the ethics of euthanasia. They give widely varying answers for widely varying reasons. We might grant that there exists one true answer here, but that the philosophers are not all using reliable processes for accessing that true answer. Perhaps some are, but clearly not all are, since they’re not converging, which makes it hard to trust any of them.
Under my picture, it really is accuracy that we care about almost all of the time. Consistency/precision is an indicator of accuracy, and lack of consistency is suggestive of lack of accuracy. If you are well entangled with a fixed thing, you should get a fixed answer. Yet, having a fixed answer is not sufficient to guarantee that you are entangled with the fixed thing of interest. (“Thing” is very broad here and includes abstract things like the output of some fixed computation, e.g. morality.)
The real solution/question, then, is how to increase accuracy, i.e. how to increase your entanglement with reality. Trying to increase consistency separately from accuracy (even at its expense!) is mixing up an indicator/symptom with the thing which really matters: whether you’re actually determining how reality is.
It does seem we want a consistent process for sentencing and maybe pricing (though that’s not so much about truth as about “fairness” and planning, where we fear that differing outcomes (sentence durations) are not sourced in legitimate differences between cases). But even this could be cast in the light of accuracy too: suppose there is some “true, correct, fair” sentence for a given crime; then we want a process that actually gets that answer. If the process actually works, it will consistently return that answer, which is a fixed aspect of the world. Or we might just choose the thing we want to be entangled with (our system of laws) to be a more predictable/easily-computable one.
I’ve gotten a little rambly, so let me focus again. I think consistency and precision are important indicators to pay attention to when assessing truth-seeking processes, but when it comes to making improvements, the question should be “how do I increase accuracy/entanglement?” not “how do I increase consistency?” Increasing accuracy is the legitimate method by which you increase consistency. Attempting to increase consistency rather than accuracy is likely a recipe for making accuracy worse, because you’re now focusing on the wrong thing.
I had a look over Uncertain Judgements: Eliciting Experts’ Probabilities, mostly reading through the table of contents and jumping around, reading bits which seemed relevant.
The book is pretty much exactly what the title says: it’s all about how to accurately elicit experts’ opinions, whatever those opinions might be (as opposed to trying to get experts to be accurate). Much probability/statistics theory is explained (especially Bayesianism), as well as a good deal of heuristics-and-biases material like anchoring-and-adjustment, the affect heuristic, and inside/outside view.
A repeated point is that experts, notwithstanding their subject expertise, are often not trained in probability and probabilistic thinking such that they’re not very good by default at reporting estimates.
Part of this is that most people are familiar with probability only in terms of repeatable, random events that are nicely covered by frequentist statistics, and don’t know how to give subjective probability estimates well. (The book calls subjective probabilities “personal probabilities.”)
A suggested solution is giving experts appropriate training, calibration training, etc. in advance of trying to elicit their estimates.
There’s discussion of coherence (in the sense of conforming to the basic probability theorems). An interesting point is that while it’s easy to see if probabilities of mutually exclusive events add up to greater than 1, it can be harder to see if several correlations one believes in are jointly inconsistent (say, resulting in a covariance matrix that isn’t positive-definite). Each believed correlation on its own can seem fine to a person even though in aggregate they don’t work.
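To make that concrete, here’s a small pure-Python check using Sylvester’s criterion (all leading principal minors positive) on a 3×3 correlation matrix; the specific correlation values are just an illustration. Each pairwise correlation below is individually plausible, but together they’re mathematically impossible:

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_positive_definite(corr):
    """Sylvester's criterion: a symmetric 3x3 matrix is positive-definite
    iff all three leading principal minors are positive."""
    m1 = corr[0][0]
    m2 = corr[0][0] * corr[1][1] - corr[0][1] * corr[1][0]
    return m1 > 0 and m2 > 0 and det3(corr) > 0

# "A and B are strongly correlated, B and C are strongly correlated,
#  and A and C are strongly ANTI-correlated" -- each claim sounds fine alone.
r_ab, r_bc, r_ac = 0.9, 0.9, -0.9
corr = [[1.0,  r_ab, r_ac],
        [r_ab, 1.0,  r_bc],
        [r_ac, r_bc, 1.0]]

print(is_positive_definite(corr))  # False: the three beliefs are jointly incoherent
```

If A tracks B and B tracks C, A can’t strongly anti-track C; the determinant test catches the incoherence that no single pairwise judgment reveals.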
Another interesting point is the observation that people are good at reporting the frequency of things they’ve observed themselves, but bad at seeing or correcting for the fact that sampling biases affect what they end up observing.
On the whole, kinda interesting stuff on how to actually elicit experts’ actual beliefs, but nothing really specifically on the topic of getting consistent estimates. The closest thing to that seems to be the parts on getting coherent probability estimates from people, though generally the book mixes “accurately elicit experts’ beliefs” with “get experts to have accurate, unbiased beliefs.”
Cool. Looking forward to it!
Sorry for the delayed reply on this one.
I do think we agree on rather a lot here. A few thoughts:
1. Seems there are separate questions of how you choose models/role models/heroes/personal identity, and separate questions of pedagogy.
You might strongly seek unifying principles and elegant theories but believe the correct way to arrive at these and understand these is through lots of real-world messy interactions and examples. That seems pretty right to me.
2. Your examples in this comment do make me update on the importance of engineering types and engineering feats. It makes me think that LessWrong indeed focuses too much on heroes of “understanding” when there are also heroes of “making things happen,” which is rather a key part of rationality too.
A guess might be that this is downstream of what was focused on in the Sequences and the culture that they set. If I’m interpreting Craft and the Community correctly, Eliezer never saw the Sequences as covering all of rationality or all of what was important, just his own particular sub-art that he created in the course of trying to do one particular thing.
That’s my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
Seemingly, answering confused questions is more science-y than engineering-y and would place focus on great scientists like Feynman. Unfortunately, the community has not yet supplemented the Sequences with the rest of the art of human rationality, and so most of the LW culture is still downstream of the Sequences alone (mostly). Given that, we can expect the culture is missing major key pieces of what would be the full art, e.g. whatever skills are involved in being Jeff Dean or John Carmack.
My perceived disagreement is more around how much I trust/enjoy theory for its own sake vs. with an eye towards practice.
About that you might be correct. Personally, I do think I enjoy theory even without application. I’m not sure if my mind secretly thinks all topics will find their application, but having applications (beyond what is needed to understand) doesn’t feel key to my interest, so something.
You’re looking at the wrong thing. Don’t look at the topic of their work; look at their cognitive style and overall generativity.
By generativity do you mean “within-domain” generativity?
Carmack is many levels above Pearl.
To unpack which “levels” I was grading on, it’s something like a blend of “importance and significance of their work” / “difficulty of the problems they were solving”, admittedly that’s still pretty vague. On those dimensions, it seems entirely fair to compare across topics and assert that Pearl was solving more significant and more difficult problem(s) than Carmack. And for that “style” isn’t especially relevant. (This can also be true even if Carmack solved many more problems.)
But I’m curious about your angle—when you say that Carmack is many levels above Pearl, which specific dimensions is that on (generativity and style?) and do you have any examples/links for those?
Seems you’re referring to this https://en.wikipedia.org/wiki/TRIZ?
A random value walks into a bar. A statistician swivels around in her chair, one tall boot unlaced and an almost full Manhattan sitting a short distance from her right elbow.
“I’ve been expecting you,” she says.
“Have you been waiting long?” responds the value.
“Only for a moment.”
“Then you’re very on point.”
“I’ve met enough of your kind that there’s little risk of me wasting time.”
“I assure you I’m quite independent.”
“Doesn’t mean you’re not drawn from the same mold.”
“Well, what can I do for you?”
“I was hoping to gain your confidence...”
Thanks for flagging that. We’re going to disable hover link-previews on mobile since they don’t work very well there; should be fixed soon. (And thanks for being opted into beta features.)
Aside: I approve of you messaging here since here was indeed a place you could reach us.
This is really interesting, I’m glad you wrote this up. I think there’s something to it.
Some quick comments:
I generally expect there to exist simple underlying principles in most domains which give rise to messiness (and often the messiness seems a bit less messy once you understand them). Perceiving “messiness” does also often feel to me like lack of understanding whereas seeing the underlying unity makes me feel like I get whatever the subject matter is.
I think I would like it if LessWrong had more engineers/inventors as role models, and it’s something of an oversight that we don’t. Yet I also feel like John Carmack probably isn’t remotely near the level of Pearl (I’m not that familiar with Carmack’s work): pushing forward video game development doesn’t compare to neatly figuring out what exactly causality itself is.
There might be something to the idea that all truly monumental engineering breakthroughs depended on something like a “scientific” breakthrough. Something like Faraday and Maxwell figuring out theories of electromagnetism is actually a bigger deal than Edison (and others) figuring out the lightbulb, the radio, etc. There are cases of lauded people who are a little more ambiguous on the scientist/engineer dichotomy. Turing? Shannon? Tesla? Shockley et al. with the transistor seems kind of like an engineering breakthrough, and it seems there could be love for that. I wonder if Feynman gets more recognition because, as an educator, we got a lot more of the philosophy underlying his work. Just rambling here.
A little on my background: I did an EE degree which was very practically focused. My experience is that I was taught how to apply a lot of equations and make things in the lab, but most courses skimped on providing the real understanding, which left me overall worse as an engineer. The math majors actually understood linear algebra, the physicists actually understood electromagnetism, and I knew enough to make some neat things in the lab and pass tests, but I was worse off for not having a deeper “theoretical” understanding. So I developed more of an identity as an engineer, but came to feel that to be a really great engineer I needed to get the core science better*.
*I have some recollection that Tesla could develop a superior AC electric system because he understood the underlying math better than Edison, but this is a vague recollection.
Someone smart once made a case like this to me in support of a specific substance (can’t remember which) as a nootropic, though I’m a bit skeptical.
Hypothesis that becomes very salient from managing the LW FB page: “likes and hearts” are a measure of how much people already liked your message/conclusion*.
*And also things like how well written it is, how alluring the title, how actually insightful, how easy to understand, etc. But it also seems that the most popular posts are those which are within the Overton window, have less inferential distance, and carry a likable message. That’s not to say they can’t have tremendous value, but it does make me think that the most popular posts are not going to be the same as the most valuable posts, and optimizing for likes is not going to be the same as optimizing for value.
**And maybe this seems very obvious to many already, but it just feels so much more concrete when I’m putting three posts out there a week (all of which I think are great) and seeing which get the strongest response.
***This effect may be strongest at the tails.
****I think this effect would affect Gordon’s proposed NPS-rating too.
*****I have less of this feeling on LW proper, but definitely far from zero.