Former tech entrepreneur (co-creator of the music software Sibelius). Among other things I now play the stock market, write software to predict it, and occasionally advise tech startups. I have degrees in philosophy.
bfinn
There are ways of showing that you are probably being honest in such situations, and thereby making yourself more credible than those who are not: viz. setting out your own weaknesses. For example, in business plans for investors this can be done in a SWOT analysis (which includes listing weaknesses and threats, as well as how you aim to deal with them).
People who claim to have no weaknesses, or at least do not mention any, or who only admit to slight weaknesses (and not to obvious larger ones), lack credibility.
This seems a great summary to me (an outsider). My own stab at it: this rationality movement seems to be about the wide application of science + analytic philosophy, especially philosophy of science (though most involved don’t know much philosophy, so don’t realise this). (Cf EA, which is about the application of the philosophy of ethics, especially of course utilitarianism.)
The novel aspect seems to be mainly the membership and the application, ie beyond normal science/technology and academia. And the distinction with post-rationality (about which I know little) seems somewhat like early vs late Wittgenstein (ie formal analysis, eg logic and math, vs more hand-wavy nuance incorporating social function, etc).
I think the error is not just that they generalised incorrectly, but that they didn’t know enough to be justified in doing so. So it combines overconfidence and over/misgeneralising.
The word ‘sophomoric’ includes some of the right connotations. One definition says ‘Overconfident but immature or poorly informed’.
So though ‘sophomoric’ is not quite specific enough itself, maybe it could be used to make a new phrase eg ‘sophomoric bias’ or ‘sophomoric generalisation’.
Interesting post. Maybe others have mentioned this, but one difference with startup founders is charisma: founders often lack it (eg tech nerds are famously uncharismatic), though of course it helps.
Also incidentally this post highlights how different the US and UK (and various other western countries) are in their attitude to religion; in the UK church attendance is tiny, and open religiosity is almost universally seen as weird and embarrassing. So this whole church planting thing seems very odd.
Years ago a small part of my work involved proof-reading successive editions of a book (a 500-page manual). I would write my suggested changes on a printout—not typo corrections, but improvements to wording & content requiring thought & expertise.
Once when doing this I had a slight sense of deja vu after correcting a page, so I looked up the same marked-up page in an earlier edition I had proof-read a year or more before. Not only had I marked the exact same changes (which mistakenly had not been implemented), but I had used almost identical pen-strokes, including both times changing a word, then thinking better of it and crossing out my change in favour of a different suggestion. So I had clearly gone through an identical thought process over several minutes for the whole page. (I still have both pages somewhere.)

I wondered at the time if psychologists had ever studied this kind of thing.
No offence to JW, but incidentally is there a term for the common cognitive bias where someone who knows a lot about X assumes (incorrectly) the same applies to superficially similar things Y that they know little about? More specific than mere ‘overconfidence’ or ‘overgeneralising’.
I think your point is roughly what I thought, viz.: isn’t this just loss aversion?
Mostly agree, but a way in which the post might partly map onto the UK is this:
Governments know they’ll lose power in a few years, at which point any controversial legislation they enacted will be reversed by the opposing dictatorship. I.e. the other major faction has a veto, but in the future. So there is still a benefit in seeking consensus.
(Often the non-government party will feign strong opposition to legislation to make headlines and look important, but will not actually reverse it when they subsequently get into power.)
Contrariwise, it seems odd that stone tool making is not a popular hobby, given what a crucial activity it was for 99% of our history.
Which suggests maybe we rapidly unevolved interest in it soon after the Stone Age.
We had many other handicrafts which continued to be useful and so persisted, even to this day; some only lost their usefulness very recently with industrialisation, continuing for now as hobbies not yet affected by evolution (eg knitting). But stone tools are not among them.
In case no-one else has raised this point:
From the AI’s perspective, modifying the AI’s goals counts as an obstacle. If an AI is optimizing a goal, and humans try to change the AI to optimize a new goal, then unless the new goal also maximizes the old goal, the AI optimizing goal 1 will want to avoid being changed into an AI optimizing goal 2, because this outcome scores poorly on the metric “is this the best way to ensure goal 1 is maximized?”.
Is this necessarily the case? Can’t the AI (be made to) try to maximise its goal knowing that the goal may change over time, hence not trying to stop it from being changed, just being prepared to switch strategy if it changes?
A footballer can score a goal even with moving goalposts. (Albeit yes it’s easier to score if the goal doesn’t move, so would the footballer necessarily stop it moving if he could?)
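The alternative I have in mind can be sketched in a few lines. This is a hypothetical toy, not anything from the quoted post: an agent that simply re-plans against whatever its current goal is at each step, rather than treating a goal change as an obstacle to resist.

```python
# Toy sketch (hypothetical): an agent that re-plans under its current goal,
# switching strategy when the goal is changed rather than resisting the change.

def best_action(goal, actions):
    # Pick the action scoring highest under the goal currently in force.
    return max(actions, key=goal)

actions = [0, 1, 2, 3]

goal_1 = lambda a: a    # goal 1: prefer larger numbers
goal_2 = lambda a: -a   # goal 2: prefer smaller numbers

current_goal = goal_1
print(best_action(current_goal, actions))  # 3 under goal 1

current_goal = goal_2   # humans swap the goal mid-run; the agent just re-plans
print(best_action(current_goal, actions))  # 0 under goal 2
```

Of course this sidesteps the hard part (whether a sufficiently capable optimizer of goal 1 would ever leave the swap mechanism alone), but it shows the structure of "maximise knowing the goal may change".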
Orwell noted that the semicolon is almost redundant. I wonder if sentences that once would have had a semicolon halfway through are now split into two sentences.
No, except that, as mentioned, maybe I have particularly sensitive feet.
Great, thanks!
Yes, still doing this every morning, and it still works the same as ever!
Somewhat relatedly, about 10 years ago I heard someone on the radio predicting that a long-term effect of social media would be greater acceptance of others’ flaws, particularly youthful indiscretions that previously would have damaged a politician’s career—e.g. that they had smoked marijuana at college.
Such indiscretions would now be permanently documented on say Facebook when they occur. So everyone would gradually get used to the idea that such things are widespread and almost-normal, and almost all future politicians would be found to have such flaws/misdemeanours in their past. Expecting them to have none would become unrealistic, and if anything a politician with nothing bad visible in their past would seem not just squeaky-clean but abnormal, perhaps even weird.

This seemed quite a profound observation at the time, at least for the kind of analysis usually heard on the radio. But in the 10 years since I haven’t particularly noticed this trend in public attitudes emerging. Or at least, judging by social media (probably unwise), in many ways there seems to be less tolerance than before. Or more likely a mixed picture—there is more tolerance of some things (e.g. being trans) but less of others (e.g. unfashionable views), and (as is often observed) social media amplifies intolerance as it makes for stronger clickbait. So possibly the overall trend in tolerance, or at least of tolerance apparent on social media, is flatter than predicted.
Re departmental historians: the UK’s Foreign Office does have (or had) a small team of historians; someone I know was one of them for a year or two. Apparently they were writing up the history of the Foreign Office in chronological order, at slower than real time; hence were falling further and further behind. They had got up to 1947 or something, but would never catch up. When they completed the history of a year, it was published in an internal book (I assume not publicly available due to national security etc.), which went on a shelf and no-one ever read.
Each year the historians had to submit a justification for their continued existence. The guy I know said there was none, and they should just write: “The Foreign Office should close down its history department.”
I suppose what this shows is that if internal historians have a use, it’s important that they know the full departmental history, particularly recent decades; they are involved in departmental decisions; and the history is as up-to-date as possible.
Upvote, not least for my first ever sighting in the wild of the interrobang.
The thickness has units of something like [effect]/[work]
I.e. presumably benefit/cost (work being a cost, whether financial or not), in other words the benefit-cost ratio (BCR) used in cost-benefit analysis in economics.
Is the moral of this really that all decisions should be made so as to maximize the ultimate goal of happiness x longevity (of you or everyone), in utilitarian fashion; whereas maximizing for subgoals is sometimes/often a poor proxy?
Or is it impractical to do utilitarian calculus all the time, but calculations/heuristics with the thin and thick lines can clarify the role of the subgoals so they can be used as adequate proxies?
(It’s partly unclear in my head as I didn’t grok the exact meaning of the lines & their thicknesses. And it’s too late at night for me to think about this!)
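To make the BCR reading concrete, here is a minimal hypothetical illustration (the options and numbers are invented, not from the post): rank options by benefit divided by cost and act on the highest ratio first.

```python
# Hypothetical example: ranking options by benefit-cost ratio (BCR).
options = {
    "option A": (100, 20),   # (benefit, cost) in arbitrary units; BCR = 5
    "option B": (300, 100),  # BCR = 3
    "option C": (50, 5),     # BCR = 10
}

def bcr(benefit, cost):
    # Benefit-cost ratio: units of [effect]/[work], as in the post.
    return benefit / cost

ranked = sorted(options, key=lambda name: bcr(*options[name]), reverse=True)
print(ranked)  # highest BCR first: option C, then A, then B
```

Note this maximises benefit per unit of work, which is the right rule when work is the binding constraint; it is not the same as maximising total benefit outright.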
Kudos for this heading. A passing pun on someone’s name is a great way of poking fun & mildly insulting them (warranted in this case). I am reminded of a paper critiquing one by QM physicist Henry Stapp, entitled “A Stapp in the wrong direction”.