I think this is sometimes true but often not.
Andrew Denton, an Australian journalist, made a podcast about the question of euthanasia (well worth a listen: https://www.wheelercentre.com/broadcasts/podcasts/better-off-dead). In the course of making it he attended a right-to-life conference, at which speakers spoke openly about the fact that the arguments they used in public against voluntary euthanasia were not at all their own reasons for opposing it.
In summary, their actual reason for opposing VE is that in Christian theology you are not allowed to die until Jesus decides to take you, i.e. until you have suffered enough. Because this reason is unacceptable to most people, they said they would try on various arguments and use the ones that seemed to resonate, e.g. Hitler used euthanasia as an excuse to murder people, people will kill granny to get the inheritance, people will kill the disabled and other “useless eaters”, governments will encourage euthanasia to save aged-care dollars.
In American politics Donald Trump started using the phrase “Drain the Swamp” frequently when he noticed that people responded to it. I leave it to the reader to judge whether it was his intention to drain the swamp, or whether he even thought it was possible.
In general IMHO people often advance bogus arguments because they know their real reasons will not be acceptable. In fact there is some evidence that confabulation is a core competency of the human brain. See e.g. https://en.wikipedia.org/wiki/Split-brain
Follow up on Herbert Simon. From his book “Models of my life”
He worked 60–80 hours a week, but does not detail what “work” means.
When collaborating with someone, he comments that most of his day’s work would usually be done by 10 am, about the time his collaborator was getting started. This perhaps hints that early in the day he did a few hours of really hard intellectual work.
What HS regarded as hard work may differ from other people. For example he learned about 20 languages to the point of being able to read papers, and 4-5 to the level of reading literature. But he regarded this as a fun/hobby thing.
He had a problem of hobbies turning into work, and had to drop several of them (e.g. playing musical instruments).
At college he only did enough work to get A grades. Early on he spent too much time playing ping-pong and his grades slipped.
He published ~1,000 papers and 37 books, and has accrued over 350,000 citations to date. So he was amazingly productive.
He spent a lot of time on office politics and other managerial and administrative things.
He found writing easy and so wrote many/most of the papers he was a collaborator on.
Conclusion: HS was very smart, very productive, found things that were challenging for others to be fun/hobbies, and while it seems he did work long hours, it is not clear how much time he spent at the highest level of effort. There are hints he did concentrate his top tier work in the first few hours of the day.
“As the emotional part of our brain sees imagination and memory as the same, this resolved the trauma.”
I think you are talking about something downstream from the problem OP reported. What you said explains why changing the memory would help. But I think it is not relevant to the question of whether you *can* change the memory.
If there are parts of you that think that holding on to the memory, and to whatever partial solutions you came up with at the time, is important, you will have trouble changing it, no matter what the benefits would be after the fact.
And of course, given the traumatic nature of such memories, holding onto them and the solutions you found does tend to seem very important. Books and reports of therapy are full of examples of this kind of thing.
Also to reinforce a very important point: even when experts are not very expert, they are probably a lot better than you+google+30minutes!
Good post. To which I would add...
There is much more to expertise than forecasting, for example:
Designing and building “things” of various kinds that work.
“Things” here can include social systems, people, business structures, and advertising campaigns, not just machines.
A person may be a very good football coach in the sense of putting together a team that wins, or fixing a losing team, but may not be very good at making predictions. Doctors are notoriously bad at predicting patient outcomes but are often very skilled at actually treating them.
I think to a degree you confuse assessing whether a group does have expertise with assessing whether they are *likely* to have expertise.
As for factors that count against expertise being reliably or significantly present, to your point about politics I would add:
1. Money. The medical literature is replete with studies showing huge effect sizes from “who paid the piper”. In pharmaceutical research, sponsorship seems to be associated with about a 4x difference in the chance of a positive result. But there is more to it than this; the ability to offer speaking and consultancy fees, funding for future projects, etc. can have a powerful effect.
Another example is the problem alluded to in relation to the consensus about the historical Jesus. When a field is dominated by people whose livelihood depends on their continuing to espouse a certain belief, the effect goes beyond those individuals and infects the whole field.
2. The pernicious effects of “great men” who can suppress dissent against their out-of-date views. Is the field realistically pluralistic, and is dissent allowed? Science advances funeral by funeral. Have a look at what happened to John Yudkin, author of “Pure, White and Deadly”.
3. Politics beyond what we normally think of as politics. Academia is notoriously “political” in this wider sense. Amplifying your point about reality checks, if feedback is not accurate, rapid, and unambiguous, it is hard for people in the field to know who is right, if anyone.
4. Publish or perish. There are massive incentives to get published, or to get publicity or a high profile. This leads to people claiming expertise, results, and achievements that are bogus. Consider for example Theranos, which seems, if media reports are accurate, to have had no useful ability to build systems that did pathology tests, yet apparently hoodwinked many into thinking that it did.
You make a good point that claims of expertise without evidence, or worse, in the face of adverse evidence, are really, really bad. I would go as far as to say that if you claim expertise but cannot prove it, I have a strong prior that you don’t have it.
There are large groups of self-described experts who do not have expertise or at best have far less than they think. One should be alert to the possibility that “experts” aren’t.
Having said all that, there is a crying need for more work in this area.
The current lead I am following up is Herbert Simon. Will also check out Knuth.
Someone suggested Flaubert, who worked 12 hours a day. And produced 0.7 (really well honed) words per hour.
I see now how this could happen, and evidently it happened to you.
It has not happened to me, even though I used it quite aggressively e.g. to instil objectively false but useful beliefs.
I am trying to work out what is different… I did this as part of the IFS (Internal Family Systems) process, as a more powerful way to resolve exiles that are hard to fix.
I suspect maybe the difference is that in IFS they make a huge deal about honoring the ‘parts’, including exiles. In your terms these would be the unhelpful beliefs. You need, ideally, to fully accept that they are there for a reason and have good intentions. In IFS it is a common rookie mistake to try to shove ‘bad’ parts away prematurely and tell them to stop doing or believing that thing right away. If you do this they will often resist vehemently, in open or covert ways. Once you do get to know them, appreciate them, and acknowledge their good intentions, they are often very willing to form the intention to change, and in this case they will not resist.
So my suggestion would be to try to get to know the ‘false’ belief better: acknowledge why it is there, the good it did, and the good intention behind it, along with any associated beliefs (there can be quite a complex structure of chained beliefs and practices). Only then do you ask it: are you happy with the current set-up? Would you like to change anything? Check that every bone of your body really does want to change the belief. Usually at this point it is pretty easy to change, and you are done.
If the ‘exile’ *wants* to change but cannot then the UTEB techniques can be very useful. I will give one example.
As a very young student I had a vicious and sadistic teacher. Apart from her beatings, she employed psychological terror tactics seemingly designed to maximize our terror, helplessness, and humiliation. I had frequent flashbacks, which I see as a form of hyper-vigilance whose intention was to keep me safe. I tried all the usual techniques for resolving flashbacks: “we are here now”, “she is dead”, “I have adult resources that can protect you”, “I can hold you”, etc. These helped a bit, but not entirely.
So when everything else had not succeeded entirely, I tried the “nuclear option” of rewriting history. I implanted a belief that the very first time she exhibited her toxic behavior, a group of parents stormed into the classroom, beat her up, threw her out of the school, and warned her never to set foot in a school again, which she never did (in the rewritten history). We reverted to our previous teacher, who was lovely. This worked, even though at some level I know it is false. I think it worked because all the parts of me were united in resolving the issue; there was no internal conflict, apart from the ongoing feelings of fear and anxiety being too strong.
So again I think you may perhaps have had some residual internal conflict about changing the belief and this may be why you did not succeed at times. I hope this helps.
1. People may confuse what I did with a revenge fantasy. I don’t think revenge fantasies are very often useful. This is different because the bad thing, in the rewritten history, did not happen. There is nothing to revenge.
2. Assuming my post makes sense to you, it may illustrate why the seemingly preposterous IFS model can be quite useful—it gives you a powerful language and structure for dealing with all these internal complexities.
This is a good example of where the system makes it difficult to do sensible rational things.
It’s also an example of how the medical system assumes it will always be there, will not make mistakes, don’t you worry, etc.
It is not just for natural disasters that you need a backup supply. I found out the hard way that the capriciousness of the medical system can really hurt you. I showed up at the doctor’s and he told me that the drug I needed was no longer available from him, and that I had to see a new specialist and “requalify”. There was no way this could be done before I ran out. No one had thought to warn me that the rules had changed. In fact they had not changed, formally; there was just a silent ‘crackdown’. The word was put out that you’d better not prescribe that any more, or maybe you would get audited or raided, maybe lose your licence or Medicare accreditation; there are a lot of ways we can hurt you. Maybe there will be a complaint against you that burns a couple of years of your life to fight.
There was a similar recent case of this with the prescription-opiate crackdown. It was not what affected me, but the situation was similar. People showed up at the doctor’s for a renewal and were told “nope”: your choice is cold turkey or the street. Doctors were prescribing opiates like candy, and then they were not. Too bad if you were caught up in it.
As for solutions, most people suggest that putting a little bit away over time is the only solution. It is a waste of time trying to persuade the system to help you put together a just-in-case supply.
I have guessed that by CDT you mean https://en.wikipedia.org/wiki/Causal_decision_theory
But why make people guess?
Protip: define and/or provide links for opaque terms upon first use.
“GR and QM are valid each in their own domain.”
Their domain is supposed to be the universe, I think. Later people said GR is for the large scale and QM is for the small scale but nothing in the theories actually says this, AFAICT.
It could be that a straightforward extension of one or the other would solve the problem, somehow embracing or correcting the other. But all the obvious ways to do that have been explored and have failed.
Or it could be that both are fundamentally conceptually wrong, like Newtonian gravity was ‘wrong’ (though quite accurate most of the time). If that is the case the actual solution would look very different and would then be shown to approximate QM and GR in limiting cases.
String theory is not really a theory of physics; it is more like the idea that a certain type of theory, not yet identified, may work. So it is more of an approach or a program. But even if ST is successful, it would leave a lot of unanswered questions. And after decades there is not much sign of a breakthrough.
To be fair, one key problem is a lack of data. If we could build accelerators 10^12 times as powerful as current ones, we might have something to work on. As it stands there are many possible theories consistent with the current data. With no new data, and no way to test theories, physics degenerates into a popularity contest.
“experimental proof that hidden variables is wrong (through the EPR experiments)”
Local hidden-variable theories were disproved. But that is not at all surprising given that QM is, IMHO, non-local, as per Einstein’s “spooky action at a distance”.
It is interesting that often even when Einstein was wrong, he was fruitful. His biggest mistake, as he saw it, was the cosmological constant, now referred to as dark energy. Nietzsche would have approved.
On QM, his EPR paper led to Bell’s theorem and real progress, even though his claim was wrong.
It looks like I have to read the whole post to see whether it is of interest to me, because there is no summary. Instead you seem to just wade into the detail.
I tried reading the first sentences of each paragraph but that was useless because they are almost all opaque references to the previous material.
I suggest you add a summary and start paragraphs with a sentence encapsulating the key idea of the paragraph.
I downvoted because, in brief:
a) This article is very one-sided.
b) When you read human history, the plethora of collapses IMHO puts a strong onus of proof on those who argue it won’t happen again.
c) There are many warning signs of huge problems ahead: global warming, resource depletion (soils, fresh water, phosphates, oil, coal, uranium, numerous other minerals), overpopulation, increasing proliferation of nuclear weapons.
d) Our so-clever civilization depends utterly on cheap energy, and this looks like ending fairly soon.
e) There is no clear evidence that technological progress is rapid enough to solve these problems.
On bias see here https://www.bmj.com/content/335/7631/1202 and references. There is a lot of research about this. Note also that you do not even need to bias a particular researcher, just fund the researchers producing the answers you like, or pursuing the avenues you are interested in e.g. Coke’s sponsorship of exercise research which produces papers suggesting that perhaps exercise is the answer.
One should not simply dismiss a study because of sponsorship, but be aware of what might be going on behind the scenes. And also be aware that people are oblivious to the effect that sponsorship has on them. One study of primary care doctors found a large effect on prescribing from free courses, dinners, etc, but the doctors adamantly denied any impact.
The suggestions of things to look for are valid and useful but often you just don’t know what actually happened.
Mostly belatedly realizing that studies I had taken as gospel turned out to be wrong. This triggered an intense desire to know why and how.
Mostly medicine, nutrition, metabolism. Also finance and economics.
For people wanting to understand Kegan’s key ideas without too much pain, I suggest “The Discerning Heart” by Philip Lewis. It is a concise and excellent introduction to the topic.
One oversight I see often in this space, and here, relates to a carbon tax. It is stated that the revenue from a carbon tax can be used to compensate people, especially lower-income people, for the increased cost of living resulting from the tax. The fatal problem with this is that in a zero-emissions world there will be no emissions and therefore no carbon-tax revenue.
Of course it may be possible to compensate people via other means such as other taxes. But a carbon tax is only required because it is otherwise cheaper to emit carbon. This means costs will go up overall and that there will be a net loss (in the short term at least). There is no free lunch and someone will have to pay.
One of the most miserable things about the LW experience is realizing how little you actually know with confidence.
I’ve probably read about 1000 papers. Lessons learned the hard way...
1. Look at the sponsorship of the research and of the researchers (previous sponsorship, “consultancies”, etc. are also important, for up to 10–15 years). This creates massive bias. E.g. a lot of medical bodies and researchers are effectively owned by pharmaceutical companies.
2. Look at ideological biases of the authors. E.g. a lot of social science research assumes as a given that genes have no effect on personality or intelligence. (Yes, really).
3. Understand statistics very deeply. There is no pain-free way to get this knowledge, but without it you cannot win here. E.g. a) The assumptions behind all the statistical models b) the limitations of alleged “corrections”. You need to understand both Bayesian and Frequentist statistics in depth, to the point that they are obvious and intuitive to you.
4. Understand how researchers rig results, e.g. undisclosed multiple comparisons, peeking at the data before deciding what analysis to do, failing to pre-register the design and end points (or failing to follow the pre-registration), “run-in periods” for drug trials, sponsor-controlled committees that review and change diagnoses… There are papers about this, e.g. “Why Most Published Research Findings Are False”.
5. After checking sponsorship, read the methods section carefully and look for problems. Have valid and appropriate statistics been used? Were the logical end points assessed? Only then look at the conclusions. Do the conclusions match the body of the paper? Has the data from the study been made available to all qualified researchers to check the analysis? Things can change a lot when that happens, e.g. Tamiflu. If the data is only available to commercial interests and their stooges, this is a bad sign.
6. Has the study been replicated by independent researchers?
7. Is the study observational? If so, does it meet generally accepted criteria for valid observational studies (large effect, dose–response gradient, well-understood causal model, well-understood confounders, confounders smaller than the published effect, etc.)?
8. Do not think you can read abstracts only and learn much that is useful.
9. Read some of the vitriolic books about the problems in research, e.g. “Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare” by Peter C. Gøtzsche. Not everything in this book is true, but it will open your eyes to what can happen.
10. Face up to the fact that 80-90% of studies are useless or wrong. You will spend a lot of time reading things only to conclude that there is not much there.
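To make the multiple-comparisons problem in point 4 concrete, here is a minimal simulation sketch. The numbers (10 endpoints per study, a 0.05 significance threshold) are hypothetical, chosen for illustration; the only statistical fact it relies on is that under a true null hypothesis a well-calibrated p-value is uniformly distributed on (0, 1).

```python
import random

random.seed(0)

ALPHA = 0.05        # conventional significance threshold
ENDPOINTS = 10      # endpoints tested per study (hypothetical)
STUDIES = 100_000   # simulated studies in which the null is true

# Under a true null, each endpoint's p-value is uniform on (0, 1),
# so we can draw the p-values directly instead of simulating data.
false_positive_studies = 0
for _ in range(STUDIES):
    p_values = [random.random() for _ in range(ENDPOINTS)]
    if min(p_values) < ALPHA:  # the paper reports the "best" endpoint
        false_positive_studies += 1

rate = false_positive_studies / STUDIES
print(f"Chance of at least one p < {ALPHA}: {rate:.3f}")
print(f"Analytic value: {1 - (1 - ALPHA) ** ENDPOINTS:.3f}")  # 1 - 0.95^10 ~ 0.401
```

So with 10 undisclosed endpoints, roughly 40% of studies of a completely null effect can still report a “significant” finding, which is why pre-registered primary end points matter so much.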