# How confident are you in the Atomic Theory of Matter?

How much confidence do you place in the scientific theory that ordinary matter is made of discrete units, or ‘atoms’, as opposed to being infinitely divisible?

More than 50%? 90%? 99%? 99.9%? 99.99%? 99.999%? More? If so, how much more? (If describing your answer in percentages is cumbersome, then feel free to use the logarithmic scale of decibans, where 10 decibans corresponds to 90% confidence, 20 to 99%, 30 to 99.9%, etc.)
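As a quick sketch of the deciban conversion (note that the round figures above are the conventional 10:1-odds approximations; 90% confidence is 9:1 odds, or about 9.54 decibans):

```python
import math

def decibans(p):
    """Decibans of odds in favor of a proposition held with probability p."""
    return 10 * math.log10(p / (1 - p))

print(decibans(0.9))    # ~9.54 (exactly 10 decibans is 10:1 odds, about 90.9%)
print(decibans(0.99))   # ~19.96
print(decibans(0.999))  # ~30.00
```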

This question freely acknowledges that there are aspects of physics which the atomic theory does not directly cover, such as conditions of extremely high energy. This question is primarily concerned with that portion of physics in which the atomic theory makes testable predictions.

This question also freely acknowledges that its current phrasing and presentation may not be the best possible to elicit answers from the LessWrong community, and will be happy to accept suggestions for improvement.

Edit: By ‘atomic theory’, this question refers to the century-plus-old theory. A reasonably accurate rewording is: “Do you believe ‘H2O’ is a meaningful description of water?”.

It is more likely that *I have misread the question completely* — that it is actually about the smell of blue cheese, the color of dinosaur feathers, or some other thing — than that atomic theory is false. Indeed, it is more likely that *I have hallucinated Less Wrong* than that atomic theory is false. Because if atomic theory is false, then none of chemistry, electronics, etc. make any sense.

Therefore, so far as *I* am capable of caring, atomic theory is certain; I can go into solipsistic crisis on less evidence than I can seriously doubt it.

Might you be interested in offering your confidence levels on how likely it is that you /have/ ‘misread the question completely’, or ‘hallucinated Less Wrong’?

Less than one in seven billion.

You are *way* overconfident in your own sanity. What proportion of humans experience vivid, detailed hallucinations on a regular basis? (Not counting dreams.)

Oh, sure — that I’m hallucinating *something* is much more likely.

But some of those disorders take the form of agreeing with anything an interlocutor says — confabulating excuses or agreement.

Hence, the question isn’t *just* ‘what are the odds that I’m hallucinating something && that something is the atomic theory of matter’; it is also ‘what are the odds that I’m confabulating agreement to anything anyone asks, which, now that I’ve looked at this post, includes being asked how much I believe in atomic theory.’

(Very much, thank you! I believe in atomic theory because my dog is green, obviously. Why yes, doctor, I’m sure I haven’t changed my mind since the last visit. Why would I do something like that?)

Well, that’s a good point. I arrived at the above figure by starting with the base rate of schizophrenia and updating based on its frequency among people who are homeless or otherwise immiserated, etc., as versus the general population. At the very least, it seems that having significant hallucinations usually goes along with a lot more *being shouted at* than I experience. Perhaps it’s just that I’m well-off financially and people are likely to humor my delusions…

I wouldn’t pick schizophrenia as a base rate.

As far as I know, schizophrenics are usually aware of their unusual mental/epistemological status, so your thinking yourself normal (as I assume you do) screens off schizophrenia. Another issue with schizophrenia is that I’ve read that schizophrenic hallucinations are almost always confined to one modality — you hear someone behind you, but if you look you see nothing, I imagine is how it works — so here too you would become aware of glitches or inconsistencies in your senses.

What you want is the situation where you are unaware of any issues *and* have an issue; which is where confabulating comes in handy, because some of them are just like that and will confabulate that they’re not confabulating.

Do these people also agree that there must be something wrong with them, if that is proposed?

Are they frequently allowed to roam freely without impediment, and in particular on the internet?

I think that combining enough mental disorders to explain my life as hallucination would be a pretty hefty conjunction.

Not sure. Probably some would, some wouldn’t.

It would explain a lot, wouldn’t it?

Fair enough, and an interesting number in its own right.

If my math is right, that works out to about 99 decibans, or about 33 bits.
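That conversion can be sanity-checked in a line or two; a bit of evidence is 10·log10(2) ≈ 3.01 decibans, so the two figures are consistent:

```python
import math

# One bit of evidence doubles the odds; one deciban multiplies them by 10^(1/10).
DECIBANS_PER_BIT = 10 * math.log10(2)  # ~3.0103

print(99 / DECIBANS_PER_BIT)  # ~32.9, i.e. "about 33 bits"
```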

It occurs to me that there are several entirely different types of things which would indicate “Atomic Theory may be Wrong.”

1: You look into an Electron Microscope, expecting to see Atoms. You see Infinitely divisible Swiss Cheese spelled out into the shape of the word “Nope.”

2: You use an ultra accurate scale, expecting to weigh something at 1.255000001. It weighs 1.255000002. Because the integral weight of the subatoms, subsubatoms, subsubsubatoms etc, causes your measurement to be off.

3: I am inexplicably transported to “Fantasia.” Atomic theory doesn’t work here because Fantasia is constructed from strange matter, namely infinitely divisible magicons, but it does still work on Earth, which is in fact still made of ordinary matter.

4: I am in the middle of looking into a Microscope which is broadcasting onto a TV. Everyone is surprised to note that at 12:00 noon exactly, the Atoms suddenly appeared to be more divisible than they were before.

5: Omega shows up and I am informed that I am in a simulation which does not actually use real world physics. Instead, it uses simplified physics.

Ergo, if Atomic Theory is invalid, it brings up the question of how it is invalid:

Does it work on everything except this thing? Does it approximately work on everything, but doesn’t exactly work? Does it work on everything in a particular area, but not in other areas? Does it work at some times and not at others? Does it only work because it is the rules of the simulation?

This is important because, for instance, if there’s a 1% chance that I’m in an Omega simulation, and a 1% chance that an Omega simulation is not being run using the real world’s physics, then even if I were 100% confident that atomic physics described my simulation (which I shouldn’t be), my confidence that the atomic theory correctly describes the real world shouldn’t exceed 99.99%, if I’ve been doing the math correctly.
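The arithmetic in that last step, treating the two 1% figures as independent, is simply (a sketch of the reasoning, not a full model):

```python
p_simulation = 0.01    # assumed chance of being in an Omega simulation
p_odd_physics = 0.01   # assumed chance such a simulation doesn't use real-world physics

# Even with certainty about everything else, confidence in atomic theory
# describing the real world is capped by this failure mode:
ceiling = 1 - p_simulation * p_odd_physics
print(ceiling)  # ~0.9999, i.e. at most 99.99% confidence
```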

So I feel like the answer is, my number doesn’t describe anything without some kind of shared context between me and the person.

At least +40 dB. “Next Thursday I will not meet a funeral procession on my way home from the bus stop” is somewhere between +35 dB and +40 dB (given that such a thing has happened to me exactly once that I remember), and I’m more confident that ordinary matter is made of atoms than that.

Welp — it seems that I definitely managed to phrase my question poorly. So I’ll try again:

By ‘atomic theory’, I’m referring to the century-plus-old theory that ordinary stuff, like trees and pencils and animals, is made out of something like particles, rather than matter being a ‘stuff’ that you can just keep on dividing into smaller and smaller pieces. I’m referring to the theory that’s the foundation of chemistry.

Put yet another way: “Do you think that ‘H2O’ is a meaningful description of water?”

Seeing as I work every day with individual DNA molecules which behave discretely (as in, one goes into a cell or one doesn’t), and on the way to my advisor I walk past a machine that determines the 3D molecular structure of proteins… yeah.

This edifice not being true would rely on truly convoluted laws of the universe that emulate it in minute detail under every circumstance I can think of, but not doing so under some circumstance not yet seen. I am not sure how to quantify that, but I would certainly never plan for it being the case. >99.9? Most of the 0.1% comes from the possibility that I am intensely stupid and do not realize it, not thinking that it could be wrong within the framework of what is already known. Though at that scale the numbers are really hard to calibrate.

Alright, to try and make calibration easier, how about this thought experiment — which do you think would be more likely: that if you bought a random ticket, you’d win the grand prize of a 1-in-1,000 lottery, or that atomic theory will be proven false? At what point do the odds of the lottery ticket start to come close to the odds of falsifying atomic theory?

I think a more plausible scenario for the atomic theory being wrong would be that the scientific community—and possibly the scientific method—is somehow fundamentally borked up.

Humans have come up with — and become strongly confident in — vast, highly detailed, completely nowhere-remotely-near-true theories before, and it’s pretty hard to tell *from the inside* whether you’re the one who won the epistemic lottery. They all *think* they have excellent reasons for believing they’re right.

Please elucidate the purpose of this question.

I’m trying to get at least a rough approximation of the upper bound of confidence that LWers place on an idea that seems, to me, to be about as proven as it’s possible for an idea to be.

It’s funny how many people sidestep the question.

Why not something like the probability of 2 + 2 = 4? Surely that’s more certain than any vague definition of “Atomic Theory of Matter.”

Mainly, because estimating the accuracy of a math statement brings in various philosophical details about the nature of math and numbers, which would likely distract from the focus on theories relating to the nature of our universe. So I went for the most foundational physical theory I could think of… and phrased it rather poorly.

If you have a suggestion on how to un-vague-ify my main post, I’d be happy to read it.

Except it doesn’t — whether or not we can know those philosophical details is conditional on the accuracy of human hardware, which, as far as I can tell, is what you want people to estimate.

For many of the obvious ways to pose the question, atomic theory is already false—multiparticle states are the real building blocks, and you can do pretty unusual things with them if you try hard enough. I think the most sensible thing to ask about is sudden failure of properties that we normally ascribe to atomic theory, like ratios working in chemical reactions or quantum mechanics predicting NMR spectra. In which case, I’d need said failures to replicate to be as good as the supporting evidence, or propose a simple-ish mechanism, like “we’re in a simulation and it’s just changed.” Taking that as my lowest standard, I’d be satisfied with a standard star-blinking pattern, or maybe a pi-message, the usual sort of thing, with null probability with an exponent somewhere in the −25 range.

1-epsilon. It is more likely that the atomic description of the universe is generally accurate (although it does break down at the subatomic level) than that I am answering a question about it.

In that case—could you offer anything like an order-of-magnitude of how large epsilon is for your answering of such questions?

(If I can’t get an estimate of epsilon for atomism, I’ll be happy for an estimate of epsilon for the estimate… or however many levels it takes to get /some/ estimate.)

99.99998754%.

Actually, I don’t know. The number above has been determined by both my credence in the atomic theory and the shape of my keyboard, anchoring and perhaps many other biases; the biases could have driven the number far away from its true value. But I dislike dodging questions about probability and believe that disagreements would be easier to resolve if people shared their probabilities when asked, so I provide an answer which is as good as I can make it.

I believe that there is a useful reformulation of similar questions:

*How many bits of evidence against the atomic theory would make you doubt the theory at the level of approximately 50% confidence?* People are much better at evaluating the strength of their beliefs when they are at about that level, so this question could in principle be settled experimentally. Of course, people aren’t ideal Bayesians and the persuasive power of evidence would depend on the way it is presented; perhaps a sufficiently clever demagogue could talk me out of my belief in the atomic theory without any evidence at all, so the approach has its limits. But it still seems that “how much evidence would it take to change my opinion” is an easier question to tackle directly than “what’s the probability that I am right”. Unfortunately, I can’t answer that for the case of the atomic theory without some hard thinking, and I am too lazy to think about it hard just now.

The idea of assigning a probability to such a thing might be, I think, what Nassim Nicholas Taleb calls the “Ludic fallacy” (see http://en.wikipedia.org/wiki/Ludic_fallacy). Alternatively, as I see it, to do such a thing, we need to start with some sort of paradigm by which we know how to set probabilities and do something useful with them. Poker is a good example, and not coincidentally, “ludic” is from the Latin ludus, meaning “play, game, sport, pastime.” It is no accident that so many introductory examples in statistics are about things like coin tossing.

Can we start by asking “Of all the universes I’ve known, in how many of them does the Atomic Theory apply?” Maybe this is frequentist, and Bayes will offer some plausible approach, but does anyone have a clue how to attack it that way?

Taleb can seem like a curmudgeonish contrarian at times, but at other times he seems to me at least like he’s onto some deep ideas. I need to read and think a lot more before I can possibly make up my mind, but at least feel very motivated to do just that.

The paradigm I’m currently looking at this through is, generally, the accumulation of evidence over long periods. In the year 1800, not even Dalton had published his (wrong) results about the mass of oxygen; there was no particular evidence /to/ believe in the atomic theory. In 1900, Einstein had yet to publish his work on Brownian motion; there was still a small but reasonable possibility that somebody would come up with a non-atomic theory that made better predictions. In 2000, atomic theory was so settled that few people even bothered calling it a ‘theory’ anymore. At any given point during those two centuries, a certain amount of evidence would have been collected relating to atomic theory, and it would have been reasonable to have different levels of confidence in it at different times. In the present day, the possibility that atomic theory is false is about as small a probability as anyone is likely to encounter — so if I can work out ideas that cover probability estimates that small, then it’s probably (ahem) safe to assume they’ll be able to cover anything with greater probability.

Or maybe I’m wrong. In which case I don’t know a better way to find out I /am/ wrong than to work through the same thing, and come across an unresolvable difficulty.

Well that’s a very commendable attitude, seriously.

How confident are you in the existence of cats?

This is a colossal waste of time.

If you don’t mind the question: How confident are /you/ in the existence of cats?

The reason I ask, is that I’m still trying to train myself to think of probability, evidence, confidence, and similar ideas logarithmically rather than linearly. 10 decibans of confidence means 90% odds, 20 decibans means 99%, 30 means 99.9%, and so on. However, 100% confidence in any proposition translates to an infinite number of decibans—requiring an infinite amount of evidence to achieve. So far, the largest amount of confidence given in the posts here is about 100 decibans… and there is a very large difference between ’100 decibans’ and ‘infinite decibans’. And that difference has some consequences, in terms of edge cases of probability estimation with high-value consequences; which has practical implications in terms of game theory, and thus politics, and thus which actions to choose in certain situations. While the whole exercise may be a waste of time for you, I feel that it isn’t for me.

How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?

Once you get as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.

I’m still not sure how I’m supposed to interpret this question. If you’re asking whether I think “matter is made up of atoms” is an extremely useful working hypothesis for many many scientific purposes, then the answer is obviously “yes” with probability that only negligibly differs from 1. (ETA: Don’t ask me how negligibly different, because I couldn’t tell you. I am not enough of an ideal Bayesian that I can meaningfully attach probabilities with many significant digits to my beliefs.)

If you’re asking whether the fundamental structure of matter is in fact discrete, I would assign that a probability of about 0.3. Quantum field theory is sometimes interpreted as a particle theory, but this seems wrong to me. It is best interpreted as telling us that the basic constituents of nature are continuous field configurations (or, to be more precise, linear superpositions of field configurations).

Particle number is not fixed in any workable relativistic quantum field theory. This strongly suggests that particles are emergent rather than fundamental. If you suppose that a particle cannot be located at two disjoint regions in a single space-like hyperplane, then any relativistic quantum theory of a fixed number of particles predicts a zero probability of finding a particle anywhere (see here for the proof). So the only consistent particle QFT is one where there are no particles!

There’s also the fact that an observer accelerating uniformly in a Minkowski vacuum will see a thermal bath of particles (the Unruh effect). If one can bring particles in or out of existence simply by a change of reference frame, then they shouldn’t be part of one’s fundamental ontology.

Of course QFT itself is in all probability not the right fundamental theory, so matter may still turn out to have discrete constituents.

That’s the general answer I’m aiming to evoke; I’m trying to get a better idea of just how big that ‘negligibly’ is.

Like I said in my edit, I can’t give you a precise answer, but to narrow it down a bit, I’m comfortable saying that the probability is higher than 1 − 10^(-9).

Really? What’s the probability that a human can even be in an epistemic state that would justify 30 bits of belief?

About the same as the probability that a human can be in a physical state that allows them to walk. Winners of a 100 million-to-one lottery overcome a prior improbability of 10^-8, and presumably are at least as certain they have won, once they have collected, as they were previously expecting to lose, so there’s somewhere above 10^16 of updating, 160 decibans, 53 bits. And ordinary people do it. If you’re so smart you can’t, there’s something wrong with your smartness.
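In the odds form of Bayes’ theorem, that update works out as follows (a sketch; only the order of magnitude matters, and the exact posterior of a lottery winner is an assumption):

```python
import math

prior = 1e-8          # prior probability of winning a 100-million-to-one lottery
posterior = 1 - 1e-8  # afterwards, roughly as sure of winning as one was of losing

prior_odds = prior / (1 - prior)
posterior_odds = posterior / (1 - posterior)
update = posterior_odds / prior_odds  # ~1e16

print(10 * math.log10(update))  # ~160 decibans
print(math.log2(update))        # ~53 bits
```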

What strikes you as implausible about 30 bits of belief? It takes more than 30 bits to single out one individual on this planet.

So all we need is an example of a universe without atoms (corresponding to the example of someone who *did* win the lottery despite the improbability of doing that) for this analogy to work.

I think there are fields of thought in which the best paradigm is that something either is or isn’t, and where probabilistic thinking will do no good, and if forced or contrived to seem to work, may do harm (e.g. the models by which Wall Street came up with a plausible — to some — argument that CDSs of subprime mortgages could be rated AAA).

And there are fields of thought in which the idea that something *simply is or isn’t* is the thing likely to mislead or do harm (see http://en.wikipedia.org/wiki/Interesting_number where one gets into trouble by thinking a number either *is* or *isn’t* “interesting”).

The “interesting number” business isn’t probabilistic either, though there may be some usefulness in Bayesian arguments that treat subjective “levels of certainty” like probabilities.

Note that probabilities like that cannot be estimated because they are at the noise level. For example, the odds are about the same that you are delusional and no one asked this question (i.e., the odds are tiny and hard to evaluate).

What level of confidence is high (or low) enough that you would feel means that something is within the ‘noise level’?

Depending on how smart I feel today, anywhere from −10 to 40 decibans.

(edit: I remember how log odds work now.)

Why is it important to quantify that value?

The most important reason I can think of: the largest number of decibans that’s yet been mentioned is 160 (though that’s more of a delta, going from −80 to +80 decibans); the highest actual number of decibans is around 100. This gives me reasonably good confidence that if any practical rules-of-thumb involving decibans I come up with can handle, say, from −127 to +127 decibans (easily storable in a single byte), then that should be sufficient to handle just about anything I come across, and I don’t have to spend the time and effort trying to extend that rule-of-thumb to 1,000 decibans.

I’m also interested in finding out what the /range/ of highest decibans given is. One person said 50; another said 100. This gives an idea of how loosely calibrated even LWers are when dealing with extreme levels of confidences, and suggests that figuring out a decent way /to/ calibrate such confidences is an area worth looking into.

An extremely minor reason, but I feel like mentioning it anyway: I’m the one and only DataPacRat, and this feels like a shiny piece of data to collect and hoard even if it doesn’t ever turn out to have any practical use.

“Not all data is productively quantifiable to arbitrary precision.” Hoard that, and grow wiser for it.

Indeed not.

In this particular case, can you recommend any better way of finding out what the limits to precision actually are?

I think how confident you can be in the math of your own confidence and the laws of probability, according to your priors of the laws of probability, is pretty much as far down as you can go.

Then this confidence prior in the laws of probability lets you apply it to your memory and do conjunction/disjunction math on the many instances of something turning out “correct”, so you get a reliability of memory given a certain amount of memory datapoints and a certain reliability of probabilities.

Then you kind of keep doing more bayes, building layer after layer relying on the reliability of your own memory and the applicability of this whole process altogether.

That seems like an acceptable upper bound to me.

And yes, that’s probably rather equivalent to saying “What are your priors for Bayes, Occam, laws of probability and your memory all correct and functional? There, that’s your upper bound.”

Would you care to offer any estimates of /your/ priors for Bayes, etc? Or what your own inputs or outputs for the overall process you describe might be?

I haven’t calculated the longer version yet, but my general impression so far is that I’m around the ~60 deciban mark as my general upper bound for any single piece of knowledge.

I’m not sure I’m even capable of calculating the longer version, since I suspect there’s a lot more information problems and more advanced math required for calculating things like the probability distributions of causal independence over individually-uncertain memories forged from unreliable causes in the (presumably very complex) causal graph representing all of this and so on.

I can’t. But that sounds like a more useful question!

Main evidence: universal belief (including experts), coherence of theory, directly testable implications that I deal with on a day-to-day basis (fuel cells), lots more directly testable predictions (Brownian motion, all the rest of chemistry, thermodynamics, solid and fluid mechanics...)

Let’s put it this way, if I encountered the kind of evidence that would be required to cast it into doubt, I’d have to revise my belief in pretty much everything. Cartesian demon level.

Maybe 15 bits? Mostly metauncertainty.

EDIT: I feel profoundly unqualified as a mere human to be tossing around that kind of certainty, but I’m going to leave it.

For ‘The Epicurean Truth’, namely that matter works on simple computable rules and everything actually perceptible to humans is emergent: certainty with an epsilon-sized crack (connected to sudden doubt of everything whatsoever).

For what you actually asked about: I’d say it’s almost the same, maybe over a hundred decibans.

Querying my expectations I find that my expectations of reality aren’t at all related to verbal statements like “The Atomic Theory of Matter” even if my verbal centers want to endorse it strongly.

I currently believe that the atomic theory of matter is a descriptive, predictive theory relating to the sorts of objects physicists have categorized and the sorts of tools that exist.

Asking “How confident are you that there are discrete units” feels like asking “how confident are you that jumping on one goomba’s head in mario gives you 100 points” (or whatever, it has been a while.)

The answer is, if you upload the code for the game and hack it then the question of points stops being relevant.

So the translation of my expectations is that I believe that “elementary particles” are indivisible insofar as the theory of elementary particles treats them as particles. But if everything is probability distributions further down, then I don’t even know what it means to say that something is indivisible. Does it mean that the distribution emerges from an indecomposable, finite-dimensional representation? In that case these distributions will have an atomic theory, but I don’t think that will actually be true, and it may be better at a smaller scale to switch analogies to something where the previous idea of atomic theory no longer makes any sense at all.

I guess that puts me at about −10 decibans; I don’t see an a priori reason that probability distributions should be atomic, and I can imagine them being fundamental. And if they aren’t, that adds to the probability of our formalisms being overturned over and over again.

I live my life under the assumption that it is correct, and I do not make allowances in my strategic thinking that it may be false. As for how hard it would be to convince me I was wrong, I am currently sufficiently invested in the atomic theory of matter that I can’t think, off-hand, what such evidence would look like. But I presume (hope) that a well-stated falsifiable experiment which showed matter as a continuum would convince me to become curious.

One viewpoint I’ve learned from the skeptical community is that individual experiments have very little value — an experiment with a stated p-value of 0.05 actually has more than a 1-in-20 chance of being wrong. Collections of experiments from a whole field of research, however, can provide valuable evidence; for example, 400 different experiments, of which around 370 lean in one direction and 30 in the other, with a noticeable trend that the tighter the experiment, the more likely it is to lean in the majority direction.

What I’m currently trying to wrestle with is that if there are 400 experiments, then even restating their p-values in terms of logarithmic decibans, you can’t /just/ add all that evidence up. At the least, there seems to be a ceiling, based on the few-in-a-billion odds of extreme mental disorder. I’m currently wondering if a second-order derivative for evidence might be in order — e.g., take decibans as a linear measure and work with a logarithm based on that. Or, perhaps, some other transformation which further reduces the impact of evidence when there’s already a lot of it.
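One toy way to formalize such a ceiling (all numbers here are illustrative assumptions, not figures from the thread): treat the posterior as a mixture over “my evidence-gathering process is sound” and “it is broken, in which case the evidence is worthless”:

```python
import math

def to_decibans(p):
    return 10 * math.log10(p / (1 - p))

def capped_confidence(evidence_decibans, p_process_broken):
    """Posterior confidence given accumulated evidence, capped by the chance
    that the whole evidence-gathering process is broken. Toy model: a broken
    process renders the evidence worthless, leaving confidence at 0.5."""
    odds = 10 ** (evidence_decibans / 10)
    p_given_sound = odds / (1 + odds)
    return (1 - p_process_broken) * p_given_sound + p_process_broken * 0.5

# 400 experiments at 2 decibans each naively sum to 800 decibans, but a
# few-in-a-billion chance of systematic delusion caps the usable total:
p = capped_confidence(800, 2e-9)
print(to_decibans(p))  # ~90 decibans, nowhere near 800
```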

A larger obstacle to adding them up is that 400 experiments are never going to be independent. There will be systematic errors. Ten experiments in ten independent laboratories by ten independent teams, all using the same, unwittingly flawed, method of measuring something, will just give a more precise measurement of a wrong value.

How do people conducting meta-analyses deal with this problem? A few Google searches showed the problem being acknowledged, but not what to do about it.

I doubt there’s a general solution open to the meta-analyst, since estimating systematic error requires domain-specific knowledge.

I would expect meta-analysis to require such knowledge anyway, but I don’t know if this is what happens in practice. Are meta-analyses customarily done other than by experts in the field?

Ideally a meta-analyst would have domain-specific knowledge, but the process of doing a basic meta-analysis is standardized enough that one can carry it out without such knowledge. One just needs to systematically locate studies, extract effect sizes from them, and find various weighted averages of those effect sizes.
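A minimal sketch of that “weighted averages of effect sizes” step, using standard inverse-variance (fixed-effect) weighting with made-up study numbers:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted pooled estimate: the basic fixed-effect
    meta-analysis step of averaging effect sizes across studies."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: effect estimates and their sampling variances.
print(fixed_effect_meta([0.30, 0.10, 0.25], [0.01, 0.04, 0.02]))
# pooled ~0.257 with standard error ~0.076
```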

Good point. Most meta-analyses *are* done by people in the field, although I’m not sure whether they’re typically experts in the specific phenomenon they’re meta-analyzing.

Thinking about it, maybe the problem’s a simple one: estimating systematic errors is really hard. I’ve seen it done occasionally for experimental physics papers, where authors can plausibly argue they’ve managed to pinpoint all possible sources of systematic error and account for them. But an epidemiologist meta-analyzing observational studies generally can’t quantify confounding biases and analogous sources of systematic error.

My own impression has been this as well: if you already understand your basic null-hypothesis testing, a regular meta-analysis isn’t *that* hard to learn how to do.

Do you have any materials on epidemiological meta-analyses? I’ve been thinking of trying to meta-analyze the correlations of lithium in drinking water, but even after a few days of looking through papers and textbooks I still haven’t found any good resources on how to handle the problems in epidemiology or population-level correlations.

Not to hand. But (as you’ve found) I doubt they’d tell you what you want to know, anyway. The problems aren’t special epidemiological phenomena but generic problems of causal inference. They just bite harder in epidemiology because (1) background theory isn’t as good at pinpointing relevant causal factors and (2) controlled experiments are harder to do in epidemiology.

If I were in your situation, I’d probably try running a sensitivity analysis. Specifically, I’d think of plausible ways confounding would’ve occurred, guesstimate a probability distribution for each possible form of confounding, then do Monte Carlo simulations using those probability distributions to estimate the probability distribution of the systematic error from confounding. This isn’t usually that satisfactory, since it’s a lot of work and the result often depends on arsepulls.
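A bare-bones sketch of that kind of Monte Carlo bias analysis for a single unmeasured binary confounder (every distribution below is an invented guess standing in for the guesstimates; the bias factor is the standard one given confounder prevalences in the exposed and unexposed groups and a confounder-outcome risk ratio):

```python
import random

def confounding_sensitivity(observed_rr, n_draws=100_000, seed=0):
    """Monte Carlo bias analysis: draw plausible parameters for an unmeasured
    binary confounder, compute the bias factor each draw implies, and divide
    it out of the observed risk ratio. All distributions are illustrative."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_draws):
        rr_cu = rng.lognormvariate(0.5, 0.3)  # confounder-outcome risk ratio
        p1 = rng.uniform(0.2, 0.6)            # confounder prevalence among exposed
        p0 = rng.uniform(0.1, 0.4)            # confounder prevalence among unexposed
        bias = (p1 * (rr_cu - 1) + 1) / (p0 * (rr_cu - 1) + 1)
        adjusted.append(observed_rr / bias)
    adjusted.sort()
    return adjusted[len(adjusted) // 2]  # median confounding-adjusted estimate

print(confounding_sensitivity(1.5))
```

A real analysis would report the full distribution of adjusted estimates rather than just the median.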

But it’s hard to do better. There are philosophers of causality out there (like this guy) who work on rigorous methods for inferring causes from observational data, but as far as I know those methods require pretty strong & fiddly assumptions. (IlyaShpitser can probably go into more detail about these methods.) They also can’t do things like magically turn a population-level correlation into an individual-level correlation, so I’d guess you’re SOL there.

I’ve found that there’s always a lot of field-specific tricks; it’s one of those things I really was hoping to find.

Yeah, that’s not worth bothering with.

The really frustrating thing about the lithium-in-drinking-water correlation is that it would be very easy to do a controlled experiment. Dump some lithium into some randomly chosen county’s water treatment plants to bring it up to the high end of ‘safe’ natural variation; come back a year later and ask the government for suicide & crime rates; see if they fell; repeat *n* times; and you’re done.

I’m interested for generic utilitarian reasons, so I’d be fine with a population-level correlation.

Hmm. Based on the epidemiology papers I’ve skimmed through over the years, there don’t seem to be any killer tricks. The usual procedure for non-experimental papers seems to be to pick a few variables out of thin air that sound like they might be confounders, measure them, and then toss them into a regression alongside the variables one actually cares about. (Sometimes matching is used instead of regression but the idea is similar.)

Still, it’s quite possible I’m only drawing a blank because I’m not an epidemiologist and I haven’t picked up enough tacit knowledge of useful analysis tricks. Flicking through papers doesn’t actually make me an expert.
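That “toss the confounders into a regression” procedure can be sketched with simulated data. Everything here is invented for illustration: a confounder drives both the exposure and the outcome, so the naive regression finds an effect that adjusting for the confounder makes disappear:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: the confounder drives both exposure and outcome,
# so the raw exposure-outcome association is spurious by construction.
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 1.0 * confounder + rng.normal(size=n)  # true exposure effect is zero

def ols(y, *columns):
    # Ordinary least squares; returns coefficients, intercept first.
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(outcome, exposure)                 # confounded estimate
adjusted = ols(outcome, exposure, confounder)  # confounder "tossed in"

print(naive[1], adjusted[1])  # naive slope is biased away from zero
```

Of course, this only works for the confounders one thought to measure, which is the weakness of the whole procedure.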

True. Even though doing experiments is harder in general in epidemiology, that’s a poor excuse for not doing the easy experiments.

Ah, I see. I misunderstood your earlier comment as being a complaint about population-level correlations.

I’m not sure which variables you’re looking for (population-level) correlations among, but my usual procedure for finding correlations is mashing keywords into Google Scholar until I find papers with estimates of the correlations I want. (For this comment, I searched for “smoking IQ conscientiousness correlation” without the quotes, to give an example.) Then I just reuse those numbers for whatever analysis I’d like to do.

This is risky because two variables can correlate differently in different populations. To reduce that risk I try to use the estimate from the population most similar to the population I have in mind, or I try estimating the correlation myself in a public use dataset that happens to include both variables and the population I want.

You never try to meta-analyze them with perhaps a state or country moderator?

I misunderstood you again; for some reason I got it into my head that you were asking about getting a point estimate of a secondary correlation that enters (as a nuisance parameter) into a meta-analysis of some primary quantity.

Yeah, if I were interested in a population-level correlation in its own right I might of course try meta-analyzing it with moderators like state or country.

Somewhere between 0 decibans and 50 decibans, depending on the question you are actually trying to ask. If you’re asking “Is there some way to interpret QM equations without invoking the concept of atoms,” I’d give that about

50⁄50, because I don’t actually know QM and I could see that going either way. If your question is instead “How confident are you that you will continue to make observations consistent with atomic theory for the rest of your life,” I’m quite confident that I won’t see anything obviously falsifying the existence of, say, hydrogen.

For the “H2O is a meaningful description of water” question, the same problem comes up: most of the scenarios in which it’s not are scenarios in which this was a trick question, because this pattern-matches really strongly as a trick question. My estimate of the probability that I missed something rarely drops below 10% when communicating through asynchronous text with another human.

I’m not trying to deal with the complications of quantum mechanics, and I’m not trying to ask a trick question. (I’m just nowhere near as good at expressing myself as I wish I was.) At the end of your first paragraph, you come close to answering my question: all I’m hoping is to get more of a quantification of how confident you are.

When answering a question like that, most of my uncertainty is uncertainty about my interpretation of the question rather than my expectations about the world. I expect my cells to continue functioning and my computer to continue working (p > 80 decibans and 70 decibans respectively), if I mix HCl and NaOH I still expect to get saltwater (p > 60 decibans), and if I shoot a beam of ionized Hydrogen atoms through a magnetic field in a vacuum, I expect that the beam will curve with a specific radius. If I replace the ionized Hydrogen with ionized Lithium and keep all other factors the same, I expect that radius to be approximately seven times that of the beam of Hydrogen atoms (with a few atoms curving at a radius of 6 times that of the Hydrogen if there’s some Lithium-6 mixed in) (p > 60 decibans). I expect nuclear fission to continue working, and so expect that some atoms will be divisible into multiple smaller atoms, though that manifests as a belief that nuclear power plants will keep working (p > 60 decibans).

On the other hand, I expect that I understand your question, but not with anywhere near the level of certainty as I have that I will continue to see the physical processes above operating as I’ve observed them to operate in the past. And even with you saying it’s not a trick question, I’m more certain that the predictions made by atomic theory will provide an accurate enough description of reality for my purposes than that this isn’t a trick question. And I’m almost positive this isn’t a trick question.
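As a side note, the Lithium-beam prediction above follows from the textbook relation r = m·v / (q·B): holding speed, charge, and field fixed (“all other factors the same”), the radius scales linearly with the ion’s mass. A quick check with standard atomic masses:

```python
# Radius of circular motion in a magnetic field: r = m * v / (q * B).
# With v, q, and B held fixed, the radius scales linearly with mass.
m_H1, m_Li7, m_Li6 = 1.008, 7.016, 6.015  # standard atomic masses in u

def radius_ratio(m_ion: float, m_ref: float = m_H1) -> float:
    return m_ion / m_ref

print(round(radius_ratio(m_Li7), 2))  # → 6.96, i.e. ~7x the Hydrogen radius
print(round(radius_ratio(m_Li6), 2))  # → 5.97, i.e. ~6x for Lithium-6
```

(If the ions were instead accelerated through the same potential rather than given the same speed, the radius would scale with the square root of the mass; the ~7× figure assumes equal speeds.)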

That’s about as ideal a response as I could ask for; thank you kindly for taking the time to write it out.

Since you’re trying to put numbers on something which many of us regard as being certainly true, I’ll take the liberty of slightly rephrasing your question.

How much confidence do I place in the scientific theory that ordinary matter is not infinitely divisible? In other words, that it is not true that no matter how small an amount of water I have, I can make a smaller amount by dividing it?

I am (informally) quite certain that water is not infinitely subdivisible. I don’t think it’s that useful an activity for me to try to put numbers on it, though. The problem is that in many of the more plausible scenarios I can think of where I’m mistaken about this, I’m also barking mad, and my numerical ability seems as likely to be affected by that as my ability to reason about atomic theory. I would need to be in the too crazy to know I’m crazy category—and probably in the physics crank with many imaginary friends category as well. Even then I don’t see myself as being in a known kind of madness to be that wrong.

The problem here is that I can reach no useful conclusions on the assumption that I am that much mistaken. The main remaining uncertainty is whether my logical mind is fundamentally broken in a way I can neither detect nor fix. It’s not easy to estimate the likelihood of that, and it’s essentially the same likelihood for a whole suite of apparently obvious things. I neglect even to estimate this number as I can’t do anything useful with it.

To not avoid the question, I say 99.9999%. And while saying this, I also note that I am not really good at estimating probabilities with this precision.

By the way, you should make a poll, with options e.g. “less than 99%”, “99%”, “99.9%”, …, “99.9999999%”, “more than 99.9999999%”, “1”.

Now that I’ve gotten an initial range of estimates here—from around 30 decibans (10 bits, 1 in a thousand) to a hundred decibans (33 bits, 1 in 10 billion), I just might do something along those lines… or maybe I’ll try drafting out a new post (maybe even for Main) that doesn’t have all the flaws of this one, such as ‘How confident can you be?’, including such a poll.
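For anyone wanting to sanity-check the unit conversions in this thread, the deciban scale defined in the question (10 decibans per factor-of-ten in the odds) maps to probabilities like this:

```python
import math

def decibans_to_probability(db: float) -> float:
    # Decibans measure log-odds: db = 10 * log10(p / (1 - p)).
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

def probability_to_decibans(p: float) -> float:
    return 10 * math.log10(p / (1 - p))

print(round(decibans_to_probability(20), 4))    # → 0.9901, i.e. ~99%
print(round(decibans_to_probability(30), 4))    # → 0.999, a 1-in-1000 chance of error
print(round(probability_to_decibans(0.999), 1)) # → 30.0
```

Bits work the same way with log base 2, which is why 30 decibans ≈ 10 bits and 100 decibans ≈ 33 bits.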

Q: How confident are you in Newton’s law of universal gravitation?

A: Depends on the conditions.

I’m a little confused by this question. In my experience, atomic theory always refers to atoms. I think you’re really asking whether quarks and such are divisible. I’m confident that there is no substructure to elementary particles, but I won’t give a number.

I did, indeed, phrase the question poorly. Yes, I’m asking about the atomic theory referring to atoms (whatever they themselves may be made of).

Oh. That one is so close to 1 that there’s no use even discussing it anymore.

If that’s how you feel… then when would you say that confidence in atomic theory /got/ close enough to 1 that there was no more use in discussing it?

“All the results of atomic theory are down to random chance, and it’s a useless model” [~0%]

“Atomic theory is the absolute best possible model for reality, and handles all edge cases ideally” [~0%]

I’d say 40 decibans is my minimum credence for ANY major established theory, simply due to the possibility of procedural error or unknown factors in any attempted disproof. i.e. I would assume that there have been at LEAST 10,000 erroneous results that purported to disprove the theory.

Alternately, if we assume there are at least a million scientists, and they average 1 test of the theory each, that gives us ~60 decibans.

If we take a common behavior that most of the world has done at least once, that gives us ~90 decibans.

Per Korzybski, I’d say that whatever you say matter is, it is not. Modeling matter as coming in discrete configurations works quite well, though we’ve seen that there are exceptions where the discrete configurations can be reconfigured if properly physically perturbed. I wouldn’t be surprised if there are tons more stable configurations than we’ve seen or created so far, and even some that no longer model well as “discrete”.

Atoms are divisible. It is possible to believe both that matter is made of atoms and that matter is infinitely divisible.

What the heck does meaningful mean? While we are at seeking meaning: “What’s the meaning of life?” I don’t think that the word makes specific predictions that would allow you to put a confidence rating on the claim being true.

It’s very important to mentally distinguish different classes of statements. If you start giving confidence levels to claims that make no predictions, you mess up your way of thinking precisely about the world.

The map is not the territory. Asking for the confidence in the belief that the map is the territory is wrong on a fundamental level.