Combining the two doesn’t solve the ‘biggest problems of utilitarianism’:
1) We know from Arrhenius’s impossibility theorems that you cannot get an axiology which avoids the repugnant conclusion without incurring other large costs (e.g. violations of transitivity, or of the independence of irrelevant alternatives). Although you don’t spell out ‘balance utilitarianism’ enough to tell which of these it violates, we know it—like any other population axiology—will have very large drawbacks.
2) ‘Balance utilitarianism’ seems a long way from the frontier of ethical theories in terms of its persuasiveness as a population ethic.
a) The write-up claims that only actions that increase both sum and median wellbeing are good, those that increase one or the other are sub-optimal, and those that decrease both are bad. Yet what if we face choices where no option increases both sum and median welfare (such as Parfit’s ‘mere addition’), and we have to choose between them? How do we balance one against the other? The devil is in these details, and a theory being silent on these cases shouldn’t be counted in its favour.
b) Yet even as it stands we can construct nasty counter-examples to the rule, based on very benign versions of mere addition. Suppose Alice is in her own universe at 10 welfare (benchmark this as a very happy life). She can press button A or button B. Button A boosts her up to 11 welfare. Button B boosts her to 10^100 welfare, and brings into existence 10^100 people at (10-10^-100) welfare (say, a life as happy as Alice’s but with a pinprick). Balance utilitarianism rates pressing button A as good (it increases both total and median welfare), but pressing button B as merely suboptimal. Yet pressing button B is much better for Alice, and also instantiates vast numbers of happy people.
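A minimal sketch of the arithmetic behind this counter-example, with 10^100 shrunk to keep the output readable; the stand-in sizes are my own, and don’t change which way the comparisons with the status quo go:

```python
from statistics import median

BIG = 10**6        # stand-in for 10^100
EXTRA = 5          # stand-in for the 10^100 new people
PINPRICK = 1e-6    # stand-in for the 10^-100 welfare decrement

status_quo = [10]                             # Alice alone at welfare 10
button_a   = [11]                             # Alice boosted to 11
button_b   = [BIG] + [10 - PINPRICK] * EXTRA  # Alice hugely better off, plus many very happy people

for name, world in [("status quo", status_quo), ("button A", button_a), ("button B", button_b)]:
    print(f"{name:10s} total = {sum(world):.6f}  median = {median(world):.6f}")

# Button A raises both total and median relative to the status quo, so it counts as 'good'.
# Button B raises the total astronomically but nudges the median just below 10, so it counts
# only as 'suboptimal', despite being far better for Alice and adding vast numbers of happy lives.
```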
c) The ‘median criterion’ is going to be generally costly, as it is insensitive to changing cardinal levels outside the median person/pair so long as ordering is unchanged (and vice-versa).
d) Median views (like average ones) also incur costs due to their violation of separability. It seems intuitive that the choiceworthiness of our actions shouldn’t depend on whether there is an alien population on Alpha Centauri who are happier/sadder than we are (e.g. if there’s lots of them and they’re happier, any act that brings more humans into existence is ‘suboptimal’ by the lights of balance util).
(Very minor, inexpert points on military history. I agree with the overall point that there can be various asymmetries, not all of which are good—although, in fairness, I don’t think Scott had intended to make this generalisation.)
1) I think you’re right that the German army was considered one of the most effective fighting forces on a ‘man for man’ basis (I recall fairly contemporaneous criticism from Allied commanders on facing them in combat, and I think the consensus of military historians is that they tended to outfight American, British, and Russian forces until the latest stages of WW2).
2) But it’s not clear how much the German army owed this performance to fascism:
Other fascist states (e.g. Italy) had much less effective fighting forces.
I understand a lot of the accounts offered to explain how the German army performed so well sound very unstereotypically fascist—delegating initiative to junior officers/NCOs rather than demanding unquestioning obedience to authority (IIRC some historical comment was that the American army was more stiflingly authoritarian than the German one for most of the war), better ‘human resource’ management of soldiers, combined arms, etc. etc. This might be owed more to Prussian heritage than to Hitler’s rise to power.
3) Per others, it is unclear whether ‘punching above one’s weight’ is the right standard for saying something is ‘better at violence’. Even if the US had worse infantry, they leveraged their industrial base to give their forces massive material advantages. If the metric for being better at violence is winning in violent contests, the fact the Germans were better at one aspect of this seems to matter little if they lost overall.
It’s perhaps worth noting that if you add in some chance of failure (e.g. even if everyone goes stag, there’s a 5% chance of ending up −5, so Elliott might be risk-averse enough to decline even if they knew everyone else was going for sure), or some unevenness in allocation (e.g. maybe you can keep rabbits to yourself, or the stag-hunt-proposer gets more of the spoils), this further strengthens the suggested takeaways. People often aren’t defecting/being insufficiently public spirited/heroic/cooperative if they aren’t ‘going to hunt stags with you’, but are sceptical of the upside and/or more sensitive to the downsides.
One option (as you say) is to try and persuade them the value prop is better than they think. Another worth highlighting is whether there are mutually beneficial deals one can offer them to join in. If we adapt Duncan’s stag hunt to have a 5% chance of failure even if everyone goes, there’s some efficient risk-balancing option A-E can take (e.g. A-C pool together to offer some insurance to D-E if they go on a failed hunt with them).
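A toy sketch of that calculus, with hypothetical payoffs and a hypothetical (strongly risk-averse) utility function of my own choosing rather than Duncan’s original numbers, just to show how a 5% failure chance can make declining reasonable on expected utility even when the hunt wins on expected value, and how an insurance transfer changes that:

```python
import math

P_FAIL    = 0.05   # chance the hunt fails even if everyone goes (from the example above)
STAG_WIN  = 10     # hypothetical per-person payoff if the stag hunt succeeds
STAG_LOSE = -5     # payoff on a failed hunt (from the example above)
RABBIT    = 1      # hypothetical sure payoff from hunting rabbit instead

def u(x, a=1.0):
    """Exponential utility: a simple (hypothetical) model of strong risk aversion."""
    return 1 - math.exp(-a * x)

ev_stag   = (1 - P_FAIL) * STAG_WIN + P_FAIL * STAG_LOSE
eu_stag   = (1 - P_FAIL) * u(STAG_WIN) + P_FAIL * u(STAG_LOSE)
eu_rabbit = u(RABBIT)

print(f"EV: stag {ev_stag:.2f} vs rabbit {RABBIT}")         # stag looks better on expected value
print(f"EU: stag {eu_stag:.2f} vs rabbit {eu_rabbit:.2f}")  # but worse for a risk-averse Elliott

# Insurance deal: A-C promise to cover 4 of Elliott's loss if the hunt fails.
eu_stag_insured = (1 - P_FAIL) * u(STAG_WIN) + P_FAIL * u(STAG_LOSE + 4)
print(f"EU: insured stag {eu_stag_insured:.2f}")             # now joining the hunt beats rabbit
```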
[Minor: one of the downsides of ‘choosing rabbit/stag’ talk is it implies the people not ‘joining in’ agree with the proposer that they are turning down a (better-EV) ‘stag’ option.]
A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.
Happily, this factor has not been missed by either my profile or 80k’s work here more generally. Among other things, we looked at:
Variance in impact between specialties and (intranational) location (1) (as well as variance in earnings for E2G reasons) (2, also, cf.)
Areas within medicine which look particularly promising (3)
Why ‘direct’ clinical impact (either between or within clinical specialties) probably has limited variance versus (e.g.) research (4), also
I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine.
I still think trying to get a handle on the average case is a useful benchmark.
[I wrote the 80k medical careers page]
I don’t see there as being a ‘fundamental confusion’ here, and not even that much of a fundamental disagreement.
When I crunched the numbers on ‘how much good do doctors do’ it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren’t (in EA terms) that exciting in terms of direct impact.
In talks, I generally use the upper 95% confidence bound or central estimate of the doctor coefficient as a rough steer (it isn’t a significant predictor, and there’s reasonable probability mass on the impact being negative): although I suspect there will be generally unaccounted confounders attenuating ‘true’ effect rather than colliders masking it, these sort of ecological studies are sufficiently insensitive to either to be no more than indications—alongside the qualitative factors—that the ‘best (naive) case’ for direct impact as a doctor isn’t promising.
There’s little that turns on which side of zero our best guess falls, so long as we can be confident it is a long way down from the best candidates: on the scale of intervention effectiveness, there’s not that much absolute distance between the estimates (I suspect) Hanson or I would offer. There might not be much disagreement even in coarse qualitative terms: Hanson’s work here—I think—focuses on the US, and US health outcomes are sufficiently pathological an outlier in the world that I’m also unsure whether marginal US medical effort is beneficial; I’m not sure Hanson has staked out a view on whether he’s similarly uncertain about positive marginal impact in non-US countries, so he might agree with my view that it is (modestly) net-positive, despite its dysfunction (neither I nor what I wrote assumes the system ‘basically knows what it’s doing’ in the common-sense meaning).
If Hanson has staked out this broader view, then I do disagree with it, but I don’t think this disagreement would indicate at least one of us has to be ‘deeply confused’ (this looks like a pretty crisp disagreement to me) nor ‘badly misinformed’ (I don’t think there are key considerations one-or-other of us is ignorant of which explains why one of us errs to sceptical or cautiously optimistic). My impressions are also less sympathetic to ‘signalling accounts’ of healthcare than his (cf.) - but again, my view isn’t ‘This is total garbage’, and I doubt he’s monomaniacally hedgehog-y about the signalling account. (Both of us have also argued for attenuating our individual impressions in deference to a wider consensus/outside view for all things considered judgements).
Although I think the balance of expertise leans against archly sceptical takes on medicine, I don’t foresee convincing adjudication on this point coming any time soon, nor that EA can reasonably expect to be the ones to provide this breakthrough—still less for all the potential sign-inverting crucial considerations out there. Stumbling on as best we can with our best guess seems a better approach than being paralyzed until we’re sure we’ve figured it all out.
It looks redundant in most cases to me: given how pervasive IQ-correlations are, I think most people can get a reasonable estimate of their IQ by observing their life history so far. E.g.
Performance on other standardised tests
Job type and professional success
Obviously, none of these are perfect signals, but I think taking them together usually gives a reasonable steer to a credible range not dramatically larger than the test-retest variability of an IQ test itself. An IQ test would still provide additional information, but I’m not sure there are many instances where (say) knowing the answer in a 5 point band versus a 10 point band is that important.
The case where I think it could be worthwhile is for those whose life history hasn’t generated the usual signals to review: maybe one was initially homeschooled and became seriously ill before starting employment/university, etc.
Googling around, phrases like ‘perception of intelligence’ seem to be keywords for a relevant literature. On a very cursory skim (i.e. no more than what you see here) it seems to suggest “people can estimate the intelligence of strangers better than chance (but with plenty of room for error and bias), even with limited exposure”. E.g.:
Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women (Note in this study the assessment was done purely on looking at a photograph of someone’s face)
Accurate Intelligence Assessments in Social Interactions: Mediators and Gender Effects (Abstract starts with: “Research indicates that people can assess a stranger’s measured intelligence more accurately than expected by chance, based on minimal information involving appearance and behavior.”)
Thin Slices of Behavior as Cues of Personality and Intelligence. (Short 1-2 min slices of behaviour in a variety of contexts lead to assessments by strangers that positively correlate with administered test scores for IQ and the Big Five.)
As you say, Bob’s good epistemic reputation should count when he says something that appears wild, especially if he has a track record that endorses him in these cases (“We’ve thought he was crazy before, but he proved us wrong”). Maybe one should think of Bob as an epistemic ‘venture capitalist’, making (seemingly) wild epistemic bets which are right more often than chance (and often illuminating even if wrong), even if they aren’t right more often than not, and this might be enough to warrant further attention (“well, he’s probably wrong about this, but maybe he’s onto something”).
I’m not sure your suggestion pushes in the right direction in the case where—pricing all of that in—we still think Bob’s belief is unreasonable and he is unreasonable for holding it. The right responses in this case by my lights are two-fold.
First, you should dismiss (rather than engage with) Bob’s wild belief—as (ex hypothesi) all things considered it should be dismissed.
Second, it should (usually) count against Bob’s overall epistemic reputation. After all, whatever it was that meant despite Bob’s merits you think he’s saying something stupid is likely an indicator of epistemic vice.
This doesn’t mean it should be a global black mark against taking Bob seriously ever again. Even the best can err badly, so one should weigh up the whole record. Furthermore, epistemic virtue has a few dimensions, and Bob’s weaknesses in some need not mean his strengths in others are insufficient to merit attention and esteem going forward: An archetype I have in mind with ‘epistemic venture capitalist’ is someone clever, creative, yet cocky and epistemically immodest—has lots of novel ideas, some true, more interesting, but many ‘duds’ arising from not doing their homework, being hedgehogs with their preferred ‘big idea’, etc.
I accept, notwithstanding those caveats, this still disincentivizes epistemic venture capitalists like Bob to some degree. Although I only have anecdata, this leans in favour of some sort of trade-off: brilliant thinkers often appear poorly calibrated and indulge in all sorts of foolish beliefs; interviews with superforecasters (e.g.) tend to emphasise things like “don’t trust your intuition, be very self sceptical, canvass lots of views, do lots of careful research on a topic before staking out a view”. Yet good epistemic progress relies on both—and if they lie on a convex frontier, one wants to have a division of labour.
Although the right balance to strike re. second-order norms depends on tricky questions about which sort of work is currently under-supplied, which has higher value on the margin, and the current norms of communal practice (all of which may differ by community), my hunch is ‘epistemic tenure’ (going beyond what I sketch above) tends to be disadvantageous.
One is noting there are plausible costs in both directions. ‘Tenure’-esque practice could spur on crackpots, have too lax a filter for noise-esque ideas, discourage broadly praiseworthy epistemic norms (cf. the virtue of scholarship), and maybe not give Bob-like figures enough guidance, so they range too far and unproductively (e.g. I recall one Nobel Laureate mentioning the idea of, “Once you win your Nobel Prize, you should go and try and figure out the hard problem of consciousness”—which seems a terrible idea).
The other is that even if there is a trade-off, one still wants to reach one’s frontier on ‘calibration/accuracy/whatever’. Scott Sumner seems able to combine researching on the inside view with judging on the outside view (see). This seems better for Sumner, and the wider intellectual community, than a Sumner* who could not do the latter.
FWIW: I’m not sure I’ve spent >100 hours on a ‘serious study of rationality’. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in that (I think) the modal post/comment I make is critical.
On the topic ‘under the hood’ here:
I sympathise with the desire to ask conditional questions which don’t inevitably widen into broader foundational issues. “Is moral nihilism true?” doesn’t seem the right sort of ‘open question’ for “What are the open questions in Utilitarianism?”. It seems better for these topics to be segregated, no matter the plausibility or not for the foundational ‘presumption’ (“Is homeopathy/climate change even real?” also seems inapposite for ‘open questions in homeopathy/anthropogenic climate change’). (cf. ‘This isn’t a 101-space’).
That being said, I think superforecasting/GJP and RQ/CART etc. are at least highly relevant to the ‘Project’ (even if this seems to be taken very broadly to normative issues in general—if Wei_Dai’s list of topics are considered elements of the wider Project, then I definitely have spent more than 100 hours in the area). For a question cluster around “How can one best make decisions on unknown domains with scant data”, the superforecasting literature seems some of the lowest hanging fruit to pluck.
Yet community competence in these areas has apparently declined. If you google ‘lesswrong GJP’ (or similar terms) you find posts on them, but these posts are many years old. There has been interesting work done in the interim: here’s something on whether the skills generalise, and something else on a training technique that not only demonstrably improves forecasting performance, but also has a handy mnemonic one could ‘try at home’. (The same applies to RQ: Sotala wrote a cool sequence on Stanovich’s ‘What Intelligence Tests Miss’, but this is 9 years old. Stanovich has written three books since expressly on rationality, none of which have been discussed here as best as I can tell.)
I don’t understand, if there are multiple people who have spent >100 hours on the Project (broadly construed), why I don’t see there being a ‘lessons from the superforecasting literature’ write-up here (I am slowly working on one myself).
Maybe I just missed the memo and many people have kept abreast of this work (ditto other ‘relevant-looking work in academia’), and it is essentially tacit knowledge for people working on the Project, but they are focusing their efforts to develop other areas. If so, a shame this is not being put into common knowledge, and I remain mystified as to why the apparent neglect of these topics versus others: it is a lot easier to be sceptical of ‘is there anything there?’ for (say) circling, introspection/meditation/enlightenment, Kegan levels, or Focusing than for the GJP, and doubt in the foundation should substantially discount the value of further elaborations on a potentially unedifying edifice.
[Minor] I think the first para is meant to be block-quoted?
There seem to be some foundational questions for the ‘Rationality project’ which (reprising my role as querulous critic) are oddly neglected in the 5-10 year history of the rationalist community: conspicuously, I find the best insight into these questions comes from psychology academia.
Is rationality best thought of as a single construct?
It roughly makes sense to talk of ‘intelligence’ or ‘physical fitness’ because performance in sub-components positively correlate: although it is hard to say which of an elite ultramarathoner, Judoka, or shotputter is fittest, I can confidently say all of them are fitter than I, and I am fitter than someone who is bedbound.
Is the same true of rationality? If it were the case that performance on tests of (say) calibration, sunk cost fallacy, and anchoring were all independent, then this would suggest ‘rationality’ is a circle our natural language draws around a grab-bag of skills or practices. The term could therefore mislead us into thinking it is a unified skill which we can ‘generally’ improve, and our efforts are better addressed at a finer level of granularity.
I think this is plausibly the case (or at least closer to the truth). The main evidence I have in mind is Stanovich’s CART, whereby tests on individual sub-components we’d mark as fairly ‘pure rationality’ (e.g. base-rate neglect, framing, overconfidence—other parts of the CART look very IQ-testy like syllogistic reasoning, on which more later) have only weak correlations with one another (e.g. 0.2 ish).
Is rationality a skill, or a trait?
Perhaps key is that rationality (general sense) is something you can get stronger at or ‘level up’ in. Yet there is a facially plausible story that rationality (especially so-called ‘epistemic’ rationality) is something more like IQ: essentially a trait where training can at best enhance performance on sub-components yet not transfer back to the broader construct. Briefly:
Overall measures of rationality (principally Stanovich’s CART) correlate about 0.7 with IQ—not much worse than IQ test subtests correlate with one another or g.
Infamous challenges in transfer. People whose job relies on a particular ‘rationality skill’ (e.g. gamblers and calibration) show greater performance in this area but not, as I recall, transfer improvements to others. This improved performance is often not only isolated but also context dependent: people may learn to avoid a particular cognitive bias in their professional lives, but remain generally susceptible to it otherwise.
The general dearth of well-evidenced successes from training. (cf. the old TAM panel on this topic, where most were autumnal).
For superforecasters, the GJP finds they can get some boost from training, but (as I understand it) the majority of their performance is attributed to selection, grouping, and aggregation.
It wouldn’t necessarily be ‘game over’ for the ‘Rationality project’ even if this turns out to be the true story. Even if it is the case that ‘drilling vocab’ doesn’t really improve my g, I might value a larger vocabulary for its own sake. In a similar way, even if there’s no transfer, some rationality skills might prove generally useful (and ‘improvable’), such that drilling them is worthwhile on their own terms.
The superforecasting point can be argued the other way: that training can still get modest increases in performance on a composite test of epistemic rationality from people already exhibiting elite performance. But it does seem crucial to get a general sense of how well (and how broadly) training can be expected to work: else embarking on a program to ‘improve rationality’ may end up as ill-starred as the ‘brain-training’ games/apps fad a few years ago.
On Functional Decision Theory (Wolfgang Schwarz)
I recently refereed Eliezer Yudkowsky and Nate Soares’s “Functional Decision Theory” for a philosophy journal. My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don’t publish my referee reports, but this time I’ll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy. I also want to explain why I think their account is wrong, which is a separate point.
I’m someone who both prefers and practises the ‘status quo’.
My impression is the key feature of this is limited (and author controlled) sharing. (There are other nifty features for things like gdocs—e.g. commenting ‘on a line’ - but this practice predates gdocs). The key benefits for ‘me as author’ are these:
1. I can target the best critics: I usually have a good idea of who is likely to help make my work better. If I broadcast, the mean quality of feedback almost certainly goes down.
2. I can leverage existing relationships: The implicit promise if I send out a draft to someone for feedback is I will engage with their criticism seriously (in contrast, there’s no obligation that I ‘should’ respond to every critical comment on a post I write). This both encourages them to do so, and may help further foster a collegial relationship going forward.
3. I can mess up privately: If what I write makes a critical (or embarrassing) mistake, or could be construed to say something objectionable, I’d prefer this be caught in private rather than my failing being on the public record for as long as there’s an internet archive (or someone inclined to take screen shots). (This community is no stranger to people—insiders or outsiders—publishing mordant criticisms of remarks made ‘off the cuff’ to infer serious faults in the speaker).
I also think the current status quo is a pretty good one from an ecosystem wide perspective too: I think there’s a useful division of labour between ‘early stage’ writings to be refined by a smaller network with lower stakes, and ‘final publications’ which the author implicitly offers an assurance (backed by their reputation) that the work is a valuable contribution to the epistemic commons.
For most work there is a ‘refining’ stage, which is better done by smaller pre-selected networks than by authors and critics mutually ‘shouting into the void’ (from the author’s side, there will likely be a fair amount of annoying/irrelevant/rubbish criticism; from a critic’s side, a fair risk your careful remarks will be ignored or brushed off).
Publication seems to be better for polished or refined work, as at this stage a) it hopefully has fewer mistakes and so is generally more valuable to the non-critical reader, and b) if there is a key mistake/objection neglected (e.g. because the pre-selected network resulted in an echo chamber), disagreement between (‘steel-manned’) positions registered publicly and hashed out seems a useful exercise. (I’m generally a fan of more ‘adversarial’—or at least ‘adversarial-tolerant’—norms for public discussion for this reason.)
This isn’t perfect, although I don’t see the ‘comments going to waste’ issue as the greatest challenge (one can adapt one’s private comments into a public one to post, although I appreciate this is a costlier route than initially writing the public comment—and ultimately, if one finds one’s private feedback is repeatedly neglected, one can decline to provide it in the first place).
The biggest one I see is the risk of people who can benefit from a ‘selective high-quality feedback network’ (either contributing useful early stage criticism, having good early stage posts, or both) not being able to enter one. Yet so long as members of existing ones still ‘keep an eye out’ for posts and comments from ‘outsiders’, this does provide a means for such people to build up a reputation to be included in future (i.e. if Alice sees Bob make good remarks etc., she’s more interested in ‘running a draft by him’ next time, or to respond positively if Bob asks her to look something over).
Once again I plead that when you see that an expert community looks like they don’t know what they’re doing, it is usually more accurate to ‘reduce confidence’ in your understanding rather than in their competence. The questions were patently not ‘about forms’, and covered pretty well the things I would have in mind (I’m a doctor, and I have fairly extensive knowledge of medical ethics).
Although ‘institutional oversight’ in medicine is often derided (IRB creep, regulatory burden, and so on and so forth), one of its main purposes is to act as a check on researchers (whatever their intent) causing harm to their patients: the idea is that the researcher (who might be biased) and the patient (who might be less well informed) should not be the only ones making these decisions. That typical oversight was bypassed here is telling, but perhaps unsurprising, as no one would green-light violating a moratorium to subject healthy embryos to poorly tested medical procedures for at best marginal clinical benefit.
A lot of questions targeted how informed the consent was, because this was often relied upon in the presentation (e.g. “Well, we didn’t get the right mutation, but it was pretty close, and the parents were happy for us to go ahead, so we did”).
The ‘read and understand’ question (I’m using the transcript, so maybe there were dumber questions which were edited out) wasn’t a question about whether the patients were literate, but whether they had adequate understanding of (e.g.) the technical caveats under which they were giving consent to proceed (e.g. one mutation induced was a 15 bp deletion rather than the natural 32 bp deletion: unlike the natural mutation, which causes a frameshift and a non-functional protein, this gives a novel protein with a five amino acid removal, which may still generate an HIV-susceptible protein and carries some remote chance of other biological effects).
The ‘training’ question is because establishing whether consent is ‘informed’, or providing the necessary information to make it so, isn’t always straightforward (have you ever had a conversation where you thought someone understood you, but later found out they didn’t?). I did a fair amount of this in medical school, and I don’t think many people think this should be an amateur sport.
(As hopefully goes without saying, having two rounds of consent where in each the consent taker is a researcher with a vested interest in the work going ahead has obvious problems, and hence why we’re so keen on third party oversight).
I also see in the transcript fairly extensive discussion about risks (off-target worries would have been tacit knowledge to the audience, so some of this was pre-empted in the presentation then later picked at), and plans for followup etc.
I don’t see the ‘why aren’t you winning?’ critique as that powerful, and I’m someone who tends to be critical of rationality writ large.
High-IQ societies and superforecasters select for demonstrable performance at being smart/epistemically rational. Yet on surveying these groups you see things like, “People generally do better than average by commonsense metrics, some are doing great, but it isn’t like everyone is a millionaire”. Given the barrier to entry to the rationalist community is more “sincere interest” than “top X-percentile of the population”, it would be remarkable if they exhibited even better outcomes as a cohort.
There are also going to be messy causal inference worries that cut either way. If there is in some sense ‘adverse selection’ (perhaps, as with IQ societies, rationalists tending to have less aptitude at social communication, greater prevalence of mental illness, or whatever else), then these people enjoying modest to good success in their lives reflects extremely well on the rationalist community. Contrariwise, there’s plausible confounding where smart creative people will naturally gravitate to rationality-esque discussion, even if this discussion doesn’t improve their effectiveness (I think a lot of non-rationalists were around OB/LW in the early days): the cohort of people who ‘teach themselves general relativity for fun’ may also enjoy much better than average success, but it probably wasn’t the relativity which did it.
A deeper worry wrt rationality is there may not be anything to be taught. The elements of (say) RQ don’t show much of a common factor (unlike IQ), correlate more strongly with IQ than with one another, and improvements in rational thinking show limited domain transfer. So there might not be much of a general sense of (epistemic) rationality, and limited hope for someone to substantially improve themselves in this area.
Another thing I’d be particularly interested in is longer term follow-up. It would be impressive if the changes to conscientiousness etc. observed in the 2015 study persist now.
I’d be hesitant to defend Great Man theory (and so would apply similar caution) but I think it can go some way, especially for defending a fragility of history hypothesis.
In precis (more here):
1. Conception of any given person seems very fragile. If parents decide to conceive an hour earlier or later (or have done different things earlier in the day, etc. etc.), it seems likely that a different one of the 100 million available sperm fuses than the one which in fact did. The counterpart seems naturally modelled by a sibling, and siblings are considerably different from one another.
2. Although sometimes (/often) supposed Great Men are mere errand-boys of providence, it’s hard to say this is always the case. It seems the 20th century would have been pretty different if Hitler had not been around to rise to power, the character of world religions would be different with siblings of Jesus, Muhammad, etc. in their place, and Tolstoy’s brother probably wouldn’t have written War and Peace anyway. (Although maybe in some areas the ramifications are less pronounced—Great Scientists may alter the timing of discoveries a bit, but it looks plausible that we’d have Relativity by now even without Einstein.)
3. 1 and 2 suggest you could get a lot of scrambling of who is around. Even if it was inevitable there would be a Mongol expansion, the precise nature of this seems sensitive to who is in charge, and so to whether Genghis Khan or his sibling was born. The precise details of this expansion (who gets encroached on first, which battles are fought, etc. etc.) horizontally perturb whether, when (and with whom) other people conceive children. These different children go on to alter, vertically and horizontally, who else is conceived, and so the conceptive chaos propagates. I’d semi-seriously defend the thesis that none of us would be here if Genghis Khan’s parents had decided to wait an hour before having sex.
4. This wouldn’t mean the world is merely putty to be sculpted by great men. But even if the stage (and dramatis personae) of history is set by broader factors, which actors take on the role might still have considerable effects on the performance.
I’m not sure t-tests are the best approach to take compared to something non-parametric, given the smallish sample, considerable skew, etc. (this paper’s statistical methods section is pretty handy). Nonetheless I’m confident the considerable effect size (in relative terms, almost a doubling) is not an artefact of statistical technique: when I plugged the numbers into a chi-squared calculator I got P < 0.001, and I’m confident a permutation technique or similar would find much the same.
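For concreteness, a sketch of both checks on a 2x2 table; the cell counts below are placeholders of my own (the study’s actual numbers aren’t reproduced here), so only the method, not the p-values, carries over:

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: group A / group B; cols: outcome present / absent (placeholder counts, not the study's data)
table = np.array([[40, 60],
                  [21, 79]])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared p = {p:.4f}")

# Permutation test: shuffle group labels and compare the observed difference
# in proportions against the shuffled distribution.
outcomes = np.concatenate([np.repeat([1, 0], table[0]), np.repeat([1, 0], table[1])])
n_a = table[0].sum()
obs_diff = outcomes[:n_a].mean() - outcomes[n_a:].mean()

rng = np.random.default_rng(0)
perm_diffs = []
for _ in range(10_000):
    shuffled = rng.permutation(outcomes)
    perm_diffs.append(shuffled[:n_a].mean() - shuffled[n_a:].mean())

p_perm = np.mean(np.abs(perm_diffs) >= abs(obs_diff))
print(f"permutation p = {p_perm:.4f}")
```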
0: We agree potentially hazardous information should only be disclosed (or potentially discovered) when the benefits of disclosure (or discovery) outweigh the downsides. Heuristics can make principles concrete, and a rule of thumb I try to follow is to have a clear objective in mind for gathering or disclosing such information (and being wary of vague justifications like ‘improving background knowledge’ or ‘better epistemic commons’) and incur the least possible information hazard in achieving this.
A further heuristic which seems right to me is one should disclose information in the way that maximally disadvantages bad actors versus good ones. There are a wide spectrum of approaches that could be taken that lie between ‘try to forget about it’, and ‘broadcast publicly’, and I think one of the intermediate options is often best.
1: I disagree with many of the considerations which push towards more open disclosure and discussion.
1.1: I don’t think we should be confident there is little downside in disclosing dangers a sophisticated bad actor would likely rediscover themselves. Not all plausible bad actors are sophisticated: a typical criminal or terrorist is no mastermind, and so may not make (to us) relatively straightforward insights, but could still ‘pick them up’ from elsewhere.
1.2: Although a big fan of epistemic modesty (and generally a detractor of ‘EA exceptionalism’), EAs do have an impressive track record in coming up with novel and important ideas. So there is some chance of coming up with something novel and dangerous even without exceptional effort.
1.3: I emphatically disagree we are at ‘infohazard saturation’, where the situation re. infohazards ‘can’t get any worse’. I also find it unfathomable ever being confident enough in this claim to base strategy upon its assumption (cf. eukaryote’s comment).
1.4: There are some benefits to getting out ‘in front’ of more reckless disclosure by someone else. Yet in cases where one wouldn’t want to disclose it oneself, delaying the downsides of wide disclosure as long as possible seems usually more important, and so rules against bringing this to an end by disclosing yourself, save in the (rare) cases where one knows disclosure is imminent rather than merely possible.
2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.
2.1: It seems easy to imagine cases where the general idea comprises most of the danger. The conceptual step to a ‘key insight’ of how something could be dangerously misused ‘in principle’ might be much harder to make than subsequent steps from this insight to realising this danger ‘in practice’. In such cases the insight is the key bottleneck for bad actors traversing the risk pipeline, and so comprises a major information hazard.
2.2: For similar reasons, highlighting a neglected-by-public-discussion part of the risk landscape where one suspects information hazards lie has a considerable downside, as increased attention could prompt investigation which brings these currently dormant hazards to light.
3: Even if I take the downside risks as weightier than you, one still needs to weigh these against the benefits. I take the benefit of ‘general (or public) disclosure’ to have little marginal benefit above more limited disclosure targeted to key stakeholders. As the latter approach greatly reduces the downside risks, this is usually the better strategy by the lights of cost/benefit. At least trying targeted disclosure first seems a robustly better strategy than skipping straight to public discussion (cf.).
3.1: In bio (and I think elsewhere) the set of people relevant to setting strategy and otherwise contributing to reducing a given risk is usually small and known (e.g. particular academics, parts of the government, civil society, and so on). A particular scientist unwittingly performing research with misuse potential might need to know the risks of their work (likewise some relevant policy and security stakeholders), but the added upside to illustrating these risks in the scientific literature is limited (and the added downsides much greater). The upside of discussing them in the popular/generalist literature (including EA literature not narrowly targeted at those working on biorisk) is more limited still.
3.2: Information also informs decisions around how to weigh causes relative to one another. Yet less-hazardous information (e.g. the basic motivation given here or here, and you could throw in social epistemic steers from the prevailing views of EA ‘cognoscenti’) is sufficient for most decisions and decision-makers. The cases where this nonetheless might be ‘worth it’ (e.g. you are a decision maker allocating a large pool of human or monetary capital between cause areas) are few and so targeted disclosure (similar to 3.1 above) looks better.
3.3: Beyond the direct cost of potentially giving bad actors good ideas, the benefits of more public discussion may not be very high. There are many ways public discussion could be counter-productive (e.g. alarmism, ill-advised remarks poisoning our relationship with scientific groups, etc.). I’d suggest the examples of cryonics, AI safety, GMOs and other lowlights of public communication of policy and science are relevant cautionary examples.
4: I also want to supply other, more general considerations which point towards a very high degree of caution:
4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here) there is also an asymmetry in the sense that it is much easier to disclose previously-secret information than make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.
4.2: I take the examples of analogous fields to also support great caution. As you note, there is a norm in computer security of ‘don’t publicise a vulnerability until there’s a fix in place’, and of initially informing a responsible party to give them the opportunity to do this pre-publication. Applied to bio, this suggests targeted disclosure to those best placed to mitigate the information hazard, rather than public discussion in the hopes of prompting a fix to be produced. (Not to mention a ‘fix’ in this area might prove much more challenging than pushing a software update.)
4.3: More distantly, adversarial work (e.g. red-teaming exercises) is usually done by professionals, with a concrete decision-relevant objective in mind, with exceptional care paid to operational security, and their results are seldom made publicly available. This is for exercises which generate information hazards for a particular group or organisation—similar or greater caution should apply to exercises that one anticipates could generate information hazardous for everyone.
4.4: Even more distantly, norms of intellectual openness are used more in some areas, and much less in others (compare the research performed in academia to security services). In areas like bio, the fact that a significant proportion of the risk arises from deliberate misuse by malicious actors means security services seem to provide the closer analogy, and ‘public/open discussion’ is seldom found desirable in these contexts.
5: In my work, I try to approach potentially hazardous areas as obliquely as possible, more along the lines of general considerations of the risk landscape or from the perspective of safety-enhancing technologies and countermeasures. I do basically no ‘red-teamy’ types of research (e.g. brainstorm the nastiest things I can think of, figure out the ‘best’ ways of defeating existing protections, etc.)
(Concretely, this would comprise asking questions like, “How are disease surveillance systems forecast to improve over the medium term, and are there any robustly beneficial characteristics for preventing high-consequence events that can be pushed for?” or “Are there relevant limits which give insight to whether surveillance will be a key plank of the ‘next-gen biosecurity’ portfolio?”, and not things like, “What are the most effective approaches to make pathogen X maximally damaging yet minimally detectable?”)
I expect a non-professional doing more red-teamy work would generate less upside (e.g. less well networked to people who may be in a position to mitigate vulnerabilities they discover, less likely to unwittingly duplicate work) and more downside (e.g. less experience with trying to manage info-hazards well) than I. Given I think this work is usually a bad idea for me to do, I think it’s definitely a bad idea for non-professionals to try.
I therefore hope people working independently on this topic approach ‘object level’ work here with similar aversion to more ‘red-teamy’ stuff, or instead focus on improving their capital by gaining credentials/experience/etc. (this has other benefits: a lot of the best levers in biorisk are working with/alongside existing stakeholders rather than striking out on one’s own, and it’s hard to get a role without (e.g.) graduate training in a relevant field). I hope to produce a list of self-contained projects to help direct laudable ‘EA energy’ to the best ends.
Thanks for writing this. How best to manage hazardous information is fraught, and although I have some work in draft and under review, much remains unclear—as you say, almost anything could have some downside risk, and never discussing anything seems a poor approach.
Yet I strongly disagree with the conclusion that the default should be to discuss potentially hazardous (but non-technical) information publicly, and I think your proposals of how to manage these dangers (e.g. talk to one scientist first) generally err too lax. I provide the substance of this disagreement in a child comment.
I’d strongly endorse a heuristic along the lines of, “Try to avoid coming up with (and don’t publish) things which are novel and potentially dangerous”, with the standard of novelty being a relatively uninformed bad actor rather than an expert (e.g. highlighting/elaborating something dangerous which can be found buried in the scientific literature should be avoided).
This expressly includes more general information as well as particular technical points (e.g. “No one seems to be talking about technology X, but here’s why it has really dangerous misuse potential” would ‘count’, even if a particular ‘worked example’ wasn’t included).
I agree it would be good to have direct channels of communication for people considering things like this to get advice on whether projects they have in mind are wise to pursue, and to communicate concerns they have without feeling they need to resort to internet broadcast (cf. Jan Kulveit’s remark).
To these ends, people with concerns/questions of this nature are warmly welcomed and encouraged to contact me to arrange further discussion.
This seems right to me, and at least the ‘motte’ version of growth mindset accepts that innate ability may set pretty hard envelopes on what you can accomplish regardless of how energetically/agentically you pursue self-improvement (and this can apply across a range of ability—although it seems cruel and ludicrous to suggest someone with severe cognitive impairment can master calculus, it also seems misguided to suggest someone in middle age can become a sports star if they really go for it). As you say, taking growth mindset ‘too far’ has a dark side in that we might start thinking that people fail or struggle because they aren’t trying hard enough (generally a fault which we morally criticise) rather than because they lack the ability (generally ‘blameless’).
But I’d venture a broader criticism of growth mindset which applies both to the ‘motte’ form sketched above and to its sincere use in the rationalist community—that we shouldn’t only ‘not take it too far’, but not take it anywhere at all:
1) Growth mindset as expounded by Dweck and colleagues has not weathered replication well. The most recent systematic reviews give extremely minor effects on achievement (r=0.1) and even smaller intervention effects (d=0.08). The authors of the meta-analysis are about as sceptical as I am about whether these residual effects are real, but even if real they are extremely minor compared to more trait-like things (you can get much better prediction of academic achievement by genotyping than by assessing growth mindset).
2) There’s a natural story of reverse causation, which also applies to the closely related ‘internal versus external locus of control’. If you’re smart and living in propitious circumstances, you may be right in thinking “I can get good at this if I really try” for many different things. If you lack this good fortune, your belief “Even if I try really hard at something, for most somethings I probably won’t develop mastery (or even competence)” might be a case of accurate and laudable insight.
3a) I can think of more than a few occasions in my life where the latter was better for me. One was when I was contemplating which subjects to keep before I went to university, and I had discussions with various teachers along the lines of, “You’re good but not exceptional at this, maybe think about something else?” Or (in medical school) a conversation along the lines of, “You have dysgraphia, which probably makes you somewhat weaker at fine manual dexterity. Ophthalmology requires really good fine manual dexterity, so maybe this isn’t the specialty for you.”
3b) It also seemed to serve me better when I couldn’t circumvent my limitations by picking a different line of work. I focused especially hard on training myself to perform practical procedures because I realised I was working from a disadvantage, and so had to try harder to be satisfactory (I maintained no illusions of becoming great at it).
3c) My impression is that conversations (or thinking) like this tend to be more emotionally difficult than more aspirational, “Don’t worry, you can do it!” exhortations. So I’d guess they are probably under-supplied from their optimum.
In essence, there’s an underlying empirical topic which ‘growth mindset’ relies upon: that a lot of whether one accomplishes something or not depends on mindset or attitude. The answer to that, as best as I can tell, is this isn’t really true: we live in a world which has the uncomfortable features where which tickets one draws from the genetic lottery, birthplace lottery, and early environment lottery (etc) determine the broad strokes of one’s life far more than particular efforts of will and mindset (and growth mindset in particular, which seems to have slim-to-no effect). Many things which are possible for someone are not possible for us, no matter what we do, and no matter how hard we try.
Then there’s a prudential question of (even if it isn’t true) whether it would be better to act and believe as growth mindset would suggest. Again, it doesn’t seem so: the evidence for mindset interventions working is slim to none, and insofar as one can survey anecdata, my impression is ‘anti-growth mindset’ advice is undersupplied relative to its importance.
It is inarguable one should often persevere in work to improve oneself, to not give up ‘too soon’, and to encourage others when trying to do the same. Yet there are times when it is better to recognise the limits of one’s abilities, that one should cut their losses, and to shoulder the burden of (if one believes it to be the case) telling someone they should quit something because they ‘don’t have what it takes’. The right judgement in these cases is a matter for practical wisdom. Insofar as growth mindset as it is preached (but also as it is practised) biases us more to the former sort of behaviour, it should be resisted.