Indeed, words don’t mean things on their own; people use words to mean things. But with enough shared context, it’s a reasonable approximation to say that words mean things. Up until they don’t. Scott A expressed it rather well when discussing whether a whale is a fish. https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/
Active listening is always a good start. One does not need to agree or express any opinion whatsoever, just empathize with the other person: restate what they say in your own words, ask about or name the feelings they might have, and make them feel understood. In your example:
X = more gun control; Y = less gun control; R = people unable to defend themselves and having their rights taken away; H = increased risk of mass shootings, suicides, and children shooting themselves or others.
“I can see why gun control is important to you. Mass shootings, suicides and accidental deaths are terrible, and you are making a good point that easy availability of guns leads to more of these awful events. And you are saying that more gun control would make it harder to get guns and lower the odds of someone using them, especially accidentally or impulsively. Is that what you are saying? Please correct me if I missed or misstated something.”
“The potential future you are describing, where law-abiding citizens are unable to defend themselves from armed criminals, or worse, are unable to resist when their rights are taken away by government agencies, does sound pretty scary. It looks like your point is that more gun control would be a step toward such a future, and you find this possibility terrifying. Is this a fair summary?”
Before you can reason with someone, they need to feel safe with you emotionally. This applies to most people, whether aspiring rationalists or not. Active listening is a good way to cross this emotional distance. Your own views and opinions can be expressed afterward, and do not have to be forceful; more like a point to bring up, asking them to consider it and to help you evaluate the arguments. There is no guarantee of them changing their mind, or of you changing yours, or of any convergence whatsoever, but at least you will remain friendly and can still go for a beer together.
My point, as usual not well articulated, is that the question “how to fix things?” is way down the line. First, the apparent “distortionary dynamics” may only be an appearance of one. The situation described is a common, if metastable, equilibrium, and it is not clear to me whether it is “distortionary” or not. So, after the first impulse to “fix” the status quo passes, it’s good to investigate it first. I didn’t mean to suggest one *should* take advantage of the situation, merely to investigate whether one *could*. Just like in one of Eliezer’s examples, seeing a single overvalued house does not help you profit from it. And if there is indeed a way to do so, meaning the equilibrium is shallow enough, the next step would be to model the system as it climbs out of the current state and rolls down into one of many possible other equilibria. Those other equilibria may be even worse by the metric applied to evaluate the current state vs the imaginary ideal state. A few examples:
Most new businesses fail within 3 years, but without aspiring entrepreneurs having an overly rosy estimate of their chances of success (cf. the optimism bias mentioned in the OP) there would be a lot fewer new businesses, and everyone would be worse off in the long run.
Karl Marx called for the freedom of the working class through revolution, but any actual revolution makes things worse for everyone, including the working class, at least in the short to medium run (years to decades). If anything, history has shown that incremental evolutionary advances work a lot better.
The discussed Potemkin Villages, in moderation, can be an emotional stimulus for people to try harder. In fact, a lot of the fake statistics in the former Soviet Union served that purpose.
Hah, the auction interpretation of Quantum Mechanics! Wonder what restrictions would need to be imposed on the bidders in order to preserve both the entanglement and relativity.
Thanks, all good points. Wish we all cared to apply modifications like that.
I also agree about the off-the-shelf support advantages of Android, though the update mechanism outside of the Play Store seems to be nothing special. As for weighing the dimensions differently, there is definitely a significant difference: he puts a premium on not rocking the boat, while mine is on delivering simple, maintainable, low-risk solutions. In general, simplicity is greatly undervalued. You probably can relate.
1. I’m just removing an unnecessary assumption, to avoid the discussion about what it means to be right or wrong, and whether there is a single right answer.
2. I don’t have the clout to change the boss’s mind. Making suboptimal decisions based on implicit unjustified assumptions and incomplete information, and then getting angry when challenged, is something most humans do at some point.
A few points. I am currently in a situation where the company I work for has been wasting millions, slipping schedule and increasing risk due to picking the wrong environment for the project (Android instead of Linux. Android is necessary for apps, but terrible beyond belief for IoT, where one does not need the Play Store.) The owner of the company drank the Android Kool-Aid, and the project manager refuses to tell him what this marketing gimmick is ending up costing. It would be impolitic for me to challenge the two of them, and it’s not my money that is being wasted. We have an occasional “this would be so much easier on Linux” moment, but it goes nowhere, because the decision has been made and any change of course, even if projected to save money, would be perceived as risky and would expose someone’s incompetence. So we are stuck at the “agree to disagree” stage, and the project manager makes the call, without any interest in discussing the merits, getting angry at any mention of an alternative.
Re your set of attitudes: I find that one does not need to believe in anything like “objective reality is real” to use the technique. So, let me modify your list a bit:
- Epistemic humility: “maybe I’m the wrong one” → “maybe my approach is not the optimal one”
- Good faith: “I trust my partner to be cooperating with me”
- Belief that objective reality is real → belief that better approaches are possible
- “There’s an actual right answer here, and it’s better for each of us if we’ve both found it” → “there is a chance of a better answer, where ‘better’ can be agreed on by all parties”
- Earnest curiosity
I’m confused about something. In reality there are no perfect dice, all dice are biased in some way, intentionally or not. Thus wouldn’t a more realistic approach be something like “Given the dataset, construct the (multidimensional) probability distribution of biases.” Why privilege the “unbiased” hypothesis?
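For concreteness, the approach I have in mind has a closed form when the prior over a die’s face probabilities is a Dirichlet distribution (the conjugate prior to the multinomial likelihood). A minimal sketch, assuming a six-sided die; the function name and the example rolls are mine, purely illustrative:

```python
import numpy as np

def dice_bias_posterior(rolls, prior=1.0, n_faces=6):
    """Posterior over a die's face probabilities given observed rolls.

    With a symmetric Dirichlet(prior, ..., prior) prior -- conjugate to
    the multinomial likelihood -- the posterior is simply
    Dirichlet(prior + counts). Returns the posterior concentration
    parameters and the posterior mean probability of each face.
    """
    # Count occurrences of each face (rolls are 1-indexed).
    counts = np.bincount(np.asarray(rolls) - 1, minlength=n_faces)
    alpha = prior + counts          # posterior Dirichlet parameters
    mean = alpha / alpha.sum()      # posterior mean of each face's probability
    return alpha, mean

# Hypothetical example: 60 rolls, suspiciously heavy on sixes.
rolls = [6] * 20 + [1, 2, 3, 4, 5] * 8
alpha, mean = dice_bias_posterior(rolls)
```

Here the “unbiased” hypothesis is just one point in the simplex of possible biases, and the data pull the posterior toward whatever bias the die actually has, which seems closer to the realistic framing above.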
Good points! This also helps one avoid the Goodhart trap of optimizing for the wrong thing. It applies to savings as well: Dilbert’s 9-point financial plan is one of satisficing, not optimizing:
Dilbert creator Scott Adams claims this is “everything you need to know about personal investing”:
1. Make a will
2. Pay off your credit cards
3. Get term life insurance if you have a family to support
4. Fund your 401k to the maximum [or your local equivalent of employer contribution matching]
5. Fund your IRA to the maximum [or your local equivalent of tax-deductible investment and/or tax-free interest growth]
6. Buy a house if you want to live in a house and can afford it
7. Put six months’ worth of expenses in a money-market account
8. Take whatever money is left over and invest 70% in a stock index fund and 30% in a bond fund through any discount broker and never touch it until retirement
9. If any of this confuses you, or you have something special going on (retirement, college planning, tax issues), hire a fee-based financial planner, not one who charges a percentage of your portfolio
This model raises an important question (with implications for the real world): if you’re a detective in the kingdom of the gullible king who is at least somewhat aware of the reality of the situation and the distortionary dynamics, and you want to fix the situation (or at least reduce harm), what are your options?
I suspect that is not the first question to ask. In the spirit of Inadequate Equilibria, better initial questions would be, “Can you take advantage of the apparent irrationality of the situation?” and “What fraction of the population would have to cooperate to change things for the better?” If there is no clear answer to either, then the situation is not as irrational as it seems, and the artificial optimism is, in fact, the best policy under the circumstances.
No idea what they will show off, but, however much I would like to have the internet at my nerve tips, it is unlikely to be that.
Why is the above comment so badly downvoted?
I guess my point got lost in the shuffle. It’s right there in the OP, though. The adaptation is looking to an external higher power for answers. Initially it would have been where to hunt, but eventually it got Goodharted into praying and so on.
I see clear parallels with the treatment of Sabine Hossenfelder blowing the whistle on the particle physics community pushing for a new $20B particle accelerator. She has been going through the same adversity as any high-profile defector from a scientific community, and the arguments against her are the same ones you are listing.
Humans are trivial to kill: physically, chemically, biologically or psychologically. And a combination of those would be even more effective in collapsing the human population. I will not go into the details here, to avoid arguments and negative attention. And if your argument is that humans are tough to kill, then look into the historical data on population collapses, and that was without any adversarial pressure. Or with it, if you consider the indigenous population of the American continent.
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models.
Depending on the meaning of the word “preferred”. I tend to use “useful” instead.
my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no experiment we perform will produce a result we find surprising.
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable: a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Yes, in this highly hypothetical case I would agree.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there.
I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics the progress has slowed to a standstill: no interesting observations have been predicted since the last millennium. General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, with the magic numbers they require remaining unexplained since the Higgs mass was predicted a long time ago. While this suggests that, yes, we will probably never stop being surprised by the universe (or rather, by the observations), I make no such claims.
It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy?
Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don’t know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy may be achievable, though we often get surprised even there. Say, in math the Hilbert program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them. I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument of implying some underlying reality to explain why we need to postulate reality.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
I don’t disagree with the quoted part, it’s a decent description.
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
“Reality doesn’t exist” was not my original statement; it was “models all the way down”, a succinct way to express the current state of knowledge, where all we get is observations and layers of models based on them that predict future observations. It is useful for avoiding going astray with questions about the existence or non-existence of something, like numbers, the multiverse or qualia. If you stick to models, these questions are dissolved as meaningless (not useful for predicting future observations), just like the question of counting angels on the head of a pin. Tegmark Level X, the hard problem of consciousness, MWI vs Copenhagen: none of these are worth arguing over until and unless you suggest something that is potentially observable.
As I mentioned there, Jessica was apparently pissed and uncharacteristically uncharitable in her reply. The upvote count in this case seems to reflect tribal affiliations more than anything.
I said “a” not “the”. Yes, you could also quote Tegmark and Deutsch. I tend to favor a pragmatic approach to science, same as Sabine. You don’t have to, but it helps to realize that untestable models still “add up to normality”, to quote The Founder, and so have no bearing on your ethics.
Consider reading a real physicist’s take on the issue: Why the multiverse is religion, not science.
You got it backwards. Faith never recommends randomization; it justifies it. Like trusting the tea leaves to predict the future.
randomness in physics is cheap, and nature uses randomization in many rock-paper-scissors games without requiring religion or even brains.
Yes, randomness in physics is cheap, but I have a hard time finding examples of, say, a uniform or exponential distribution in the behaviors of higher animals. Just because something is cheap at the lower levels (e.g. quantum processes) does not mean that it is cheap at the higher levels. I welcome examples of higher-level rock-paper-scissors types of behavior.
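For what it’s worth, the game-theoretic claim behind the rock-paper-scissors analogy is easy to check numerically: the uniform (1/3, 1/3, 1/3) mix is the unique equilibrium, and any biased mix hands a best-responding opponent a positive expected payoff. A minimal sketch (the payoff matrix is the standard one; the function name and example strategies are mine, purely illustrative):

```python
# Payoff to the row player in rock-paper-scissors:
# 1 = win, 0 = tie, -1 = loss; move order is (rock, paper, scissors).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def best_response_value(strategy):
    """Expected payoff a best-responding opponent secures against a
    fixed mixed strategy.

    The opponent picks the pure move maximizing their expected payoff,
    which in this zero-sum game is the negative of ours."""
    return max(-sum(strategy[i] * PAYOFF[i][j] for i in range(3))
               for j in range(3))

uniform = (1/3, 1/3, 1/3)
biased = (0.5, 0.25, 0.25)   # slightly favors rock

# Against the uniform mix a best-responding opponent gains nothing;
# against the biased mix they gain a positive expected payoff.
```

So whenever an animal is locked in an RPS-like contest against adaptive opponents, any detectable bias in its behavior is exploitable, which is the evolutionary pressure toward randomization the quoted claim alludes to.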