My sentence was trash, sorry!
What I should have said:
“I have a 0.5ish estimate of the minicamp’s success, with big error bars. The suspicious behaviour is no more than minute confirmation of its being a failure.”
Or something like that.
Do great writers (fiction or not) say that following composition advice helped them? Do they give consistent composition advice themselves? If the answer to both questions is ‘no’, that suggests that great writing is not something easily trainable.
(Disclaimer: I think Luke’s a really good writer, so don’t read this as “if you haven’t got it, you never will, give up”).
I may dip my toe in the water and turn my prezi into my first proper LW post, time permitting. I’d be interested to see what you guys think.
(Belated reply): I can only offer anecdotal data here, but speaking as a member of GWWC, many of the members are interested. Also, listening to the directors, most of them are interested in x-risk issues too.
You are right that GWWC isn’t a charity (although it is likely to turn into one), and its recommendations are non-x-risk. The rationale for recommending charities depends on reliable data, and x-risk is one of those areas where a robust “here’s how much more likely a happy singularity will be if you give to us” analysis looks very hard.
(I was going to write a post on ‘why I’m skeptical about SIAI’, but I guess this thread is a good place to put it. This was written in a bit of a rush—if it sounds like I am dissing you guys, that isn’t my intention.)
I think the issue isn’t so much ‘arrogance’ per se—I don’t think many of your audience would care about accurate boasts—but rather that your arrogance isn’t backed up by any substantial achievement:
You say you’re right on the bleeding edge of very hard bits of technical mathematics (“we have 30-40 papers which could be published on decision theory” in one of lukeprog’s Q&As, wasn’t it?), yet as far as I can see none of you have published anything in any field of science. The problem is (as far as I can tell) you’ve been making the same boasts about all these advances for years, and they’ve never been substantiated.
You say you’ve solved all these important philosophical questions (Newcomb, quantum mechanics, free will, physicalism, etc.), yet your answers are never published, and never particularly impress those who are actual domain experts in these things—indeed, a complaint I’ve heard commonly is that LessWrong simply misunderstands the basics. An example: I’m pretty good at philosophy of religion, and the sort of arguments LessWrong seems to take as slam-dunks for atheism (“biases!” “Kolmogorov complexity!”) just aren’t impressive, or even close to the level of discussion seen in academia. This itself is no big deal (ditto MWI, philosophy of mind), but it makes for an impression of intellectual dilettantes spouting off on matters you aren’t that competent in. (I’m pretty sure most analytic philosophers roll their eyes at all the ‘tabooing’ and ‘dissolving problems’—they were trying to solve philosophy that way 80 years ago!) Worse, my (admittedly anecdotal) survey suggests a pretty mixed reception from domain experts in the stuff that really matters to your project, like probability theory, decision theory, etc.
You also generally talk about how awesome you all are via the powers of rationalism, yet none of you have done anything particularly awesome by standard measures of achievement. Writing a forest of blog posts widely reputed to be pretty good doesn’t count. Nor does writing lots of summaries of modern cogsci and stuff.
It is not all bad: there are lots of people who are awesome by conventional metrics and do awesome things who take you guys seriously, and meeting these people has raised my confidence that you are doing something interesting. But reflected esteem can only take you so far.
So my feeling is basically ‘put up or shut up’. You guys need to build a record of tangible/‘real world’ achievements, like writing some breakthrough papers on decision theory (or any papers on anything) which are published and taken seriously in mainstream science, a really popular book on ‘everyday rationality’, going off and using rationality to make zillions from the stock market, or whatever. I gather you folks are trying to do some of these: great! Until then, though, your ‘arrogance problem’ is simply that you promise lots and do little.
Hello there, I’m the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is in finite supply with efficient distribution between persons, then something like “if I extend my life to 10n years, then 9 other people who would each have lived n years like me will not” will be true. The problem is this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you ‘everyone gets 80 years’ versus ‘one of you gets 800, whilst the rest of you get nothing’, I think basically everyone would go for everyone getting 80. One of the consequences of that would seem to be expecting whoever ‘comes first’ in the existence lottery to refrain from life extension to allow subsequent persons to ‘have their go’.
If you don’t buy that future persons are objects of moral concern, then the foregoing won’t apply. But I think there are good reasons to treat them as objects of full moral concern (including a ‘right’/‘interest’ in being alive in the first place). It seems weird (given the B theory of time) that temporally remote people count for less, even though we don’t think spatial distance is morally salient. Better, we generally intuit that something like a delayed doomsday machine that euthanizes all intelligent life painlessly in a few hundred years would be a very bad thing to build.
If you dislike justice (or future persons), there’s a plausible aggregate-only argument (which bears a resemblance to Singer’s work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: 20 to 40 is worth more than 40 to 60, etc. If that’s true, and lifespan is in finite supply, then we might get more utility by having many smaller lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff in minimizing the ‘decay’ of diminishing returns versus the cost sunk into development of a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but is unlikely to be really big.
Obviously, there are huge issues over population ethics and the status of future persons, as well as finer-grained questions re. justice across hypothetical individuals. Sadly, I don’t have time to elaborate on this before summertime. Happily, I am working on this sort of thing for an elective in Oxford, so hopefully I’ll have something better developed by then!
If there are infinite person-years, then (so long as life is net positive) we have infinite utility, and I can’t see whether doling this out to a ‘smaller’ or ‘larger’ set of people (although both will have the same cardinality) will obviously matter. But anyway, I don’t think anyone really thinks we can wring infinite amounts of life out of the universe.
Total life-time will have some upper bound. So in worlds where we are efficiently filling up lifespan, the choice is between more short-lived people or fewer long-lived people. In the real world for the foreseeable future, that won’t quite apply—plausibly, there will be chunks of lifetime that can only be got at by extending your life, and couldn’t be had by a future person, so you doing so doesn’t deprive anyone else. However, that ain’t plausible for an entire society (or a large enough group) extending their lives. Limiting case: if everyone made themselves immortal, they could only add people by increases in carrying capacity.
Hello Kaj,
If you reject both continuity of identity and prioritarianism, then there isn’t much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
However, if you think you should maximize expected value under normative uncertainty (and you aren’t absolutely certain aggregate utilitarianism or consequentialism is the only thing that matters), then there might be a motive to revise your beliefs. If the aggregate concerns ‘either way’ turn out to be a wash between an immortal society and a ‘healthy aging but die’ society, then the justice/prioritarian concerns I point to might ‘tip the balance’ in favour of the latter even if you aren’t convinced it is the right theory. What I’d hope to show is that something like prioritarianism at the margin of aggregate indifference (i.e. preferring 10 utils each to 10 people over 100 to one person and 0 to the other nine) is all that is needed to buy the argument.
Suppose the old person and the child (perhaps better: young adult) would both gain 2 years, so we equalize the payoff. What then? Why not be prioritarian at the margin of aggregate indifference?
I’m not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there are plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing-returns argument would have to say something like “experience-moments in ‘older’ lives are not as good as those in younger ones”. Like you, I can’t see any particularly good support for this (although I wouldn’t be hugely surprised if it were so). However, we can again play the normative uncertainty card: our expected degree of diminishing returns is simply attenuated, multiplied by P(continuity of identity).
I agree there are ‘investment costs’ in childhood, and if those are the only costs in play, then our aggregate maximizer will want to limit them, and extending lifetime is best. I don’t think this cost differs that much between paying it once per 80 years and once per 800, though. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life extension is strictly win-win: for example, if we don’t have loads of children and so never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I’m not sure how to unpick them.
I don’t think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you certainty of 80 years versus a 1⁄10 chance of 800 and a 9⁄10 chance of nothing—the same expected number of years either way. I’m pretty sure most folks would play safe.
The addition of other people makes it clear that (granting the rest) a society of future people would want to agree that those who ‘live first’ should refrain from life extension and let the others ‘have their go’.
The nifty program is Prezi.
I didn’t particularly fill in the argument for valuing future persons—in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I’d point to future calamities (which only seem plausibly really bad if future people have interests or value—although that needn’t be on a par with ours), reciprocity across time (in the same way we would want people in the past to have weighed our interests equally with theirs when applicable, the same applies to us and our successors), and a similar sort of Rawlsian argument: if we didn’t know whether we would live now or in the future, the sort of deal we would strike would be for those currently living (whoever they are) to weigh future interests equally with their own. Elaboration pending one day, I hope!
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not-quite-so-awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
Obviously there are lots of countervailing concerns too (maybe you get wiser as you age, so you can pick even more enjoyable things, etc.).
Plausibly, depending on your view of personal identity, yes.
I won’t be identical to my copies, and so I think I’d run the same sorts of arguments I’ve made so far—copies are potential people, and behind a veil of ignorance over whether I’d be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega’s gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil of ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss-averse.)
Disclaimer: I’m an 80,000 Hours member
This post raises some important concerns, which I and probably most members of 80,000 Hours share. For instance, how plausible is it to take a high-earning career we don’t enjoy in order to donate more money to charity? But I don’t think 80k’s ‘party line’—or the views of its members—is accurately represented by these six claims. Essentially, we don’t believe that professional philanthropy is commonly the best thing we can do, or that we typically (generally? commonly?) should do it.
What we do claim is that many people (not everyone) could do more good through professional philanthropy than by working directly in the charity sector or other ‘commonsensical’ ethical careers. This does not imply that it’s typically best. We believe something more along the lines of (6). For similar reasons we don’t claim (2) or (3) either.
80k also does not claim (4). We’ve made no claims about which careers give the highest expected earnings. In fact this is an ongoing research program at 80k. On the 80k blog Carl has already written a fair amount on ‘non-traditional’ options like entrepreneurship, e.g. (http://80000hours.org/blog/23-entrepreneurship-a-game-of-poker-not-roulette). I think the common focus on banking is because, first, banking is at least fairly high earning (easily £6m over a career), and second, it’s morally controversial. So if the argument flies for banking, it works even better for less morally controversial careers.
Neither does 80k claim (5), but Grognor and Unnamed have covered that better than I could.
When making the case for SI’s comparative advantage, you point to these things:
… [A]nd the ability to do unusual things that are nevertheless quite effective at finding/creating lots of new people interested in rationality and existential risk reduction: (1) The Sequences, the best tool I know for creating aspiring rationalists, (2) Harry Potter and the Methods of Rationality, a surprisingly successful tool for grabbing the attention of mathematicians and computer scientists around the world...
What evidence supports these claims?
(Further disclaimer: I’m not a spokesperson for 80 000 hours, so this isn’t the party line—take what you find on the website over me if we disagree).
Not that I’m aware of, although given that a lot of 80k’s message is about how philanthropy as ‘formally’ or commonly considered is not as good as more counter-intuitive means, I (and I’d guess most other folks at 80k) would be pretty sympathetic to it. I guess the closest analogue on the site would be the discussion of ‘high impact PAs’ (http://80000hours.org/blog/54-the-high-impact-pa-how-anyone-can-bring-about-ground-breaking-research).
each question (posted as a comment on this page) that follows the template described below will receive a reply from myself or another SI representative.
I appreciate you folks are busy, but I’m going to bump as it has been more than a week. Besides, it strikes me as an important question given the prominence of these things to the claim that SI can buy x-risk reduction more effectively than other orgs.
I agree with Unnamed that this post misunderstands Parfit’s argument by tying it to empirical claims about resources that have no relevance.
Just imagine God is offering you choices between different universes with inhabitants at the stipulated levels of wellbeing: he offers you A, then offers you A+, then B, then B+, etc. If you are interested in maximizing aggregate value you’ll happily go along with each step to Z (indeed, if you are offered all the worlds from A to Z at once, an aggregate maximizer will go straight for Z). This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether Z (or the ‘mechanism’ of mere addition that gets from A to Z) is feasible under resource constraints, but that if this were possible, maximizing aggregate value obliges us to accept the repugnant conclusion. I don’t want to be mean, but this is a really basic error.
The OP offers something much better when proposing a pluralist view to try and get out of the mere addition paradox: we should have a separate term in our utility function for the average level of wellbeing (further, an average over currently existing people), and that will stop us reaching the repugnant conclusion. However, it only delays the inevitable. Provided the ‘average’ term doesn’t dominate (i.e. isn’t lexically prior to) the total utility term, there will be deals this average/total pluralist should accept where we lose some average but gain more than enough total utility to make up for it. Indeed, to satisfy a person-affecting view we can make it so that the ‘original’ set of people in A end up even better off:
A : 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million people at wellbeing 9.5
B : 1,000,010 people at wellbeing 9.8
A to A+ and A+ to B both increase total utility. Moving from A to A+ drops average utility by a bit under 0.5 points, but multiplies total utility by around 100,000, and all the people originally in A have their utility doubled. So it seems a pluralist average/total view should accept these moves, and we’re off to the repugnant conclusion again (and if it doesn’t, we can construct even stronger examples, like 10^10 new people added to A with wellbeing 9.99 while everyone originally in A gets 1 million utility, etc.).
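For concreteness, here is the arithmetic behind those claims as a minimal sketch (the population sizes and wellbeing levels are exactly the ones stipulated above; the snippet is just my own sanity check, not anything from the OP):

```python
# A quick sanity check of the totals and averages claimed above.
# The populations and wellbeing levels are exactly those stipulated in the
# A / A+ / B example; nothing here is new data.

populations = {
    "A":  [(10, 10.0)],                    # 10 people at wellbeing 10
    "A+": [(10, 20.0), (1_000_000, 9.5)],  # 10 at wellbeing 20, plus 1 million at 9.5
    "B":  [(1_000_010, 9.8)],              # 1,000,010 people at wellbeing 9.8
}

for name, groups in populations.items():
    total = sum(n * w for n, w in groups)
    people = sum(n for n, _ in groups)
    print(f"{name}: total = {total:,.0f}, average = {total / people:.4f}")

# Prints (approximately):
#   A: total = 100, average = 10.0000
#   A+: total = 9,500,200, average = 9.5001
#   B: total = 9,800,098, average = 9.8000
# Each step (A -> A+ -> B) raises total utility, the drop in average from
# A to A+ is just under 0.5, and the people originally in A are better off.
```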
Aside 1: Person-affecting views (caring only about people who ‘already’ exist) can get you out of the repugnant conclusion, but they have their own cost: intransitivity. If you only care about people who exist, then A → A+ is permissible (no one is harmed), A+ → B is permissible (because we are redistributing wellbeing among people who already exist), but A → B is not permissible. You can also set up cycles whereby A > B > C > A.
Aside 2: I second the sentiment that the masses of upvotes this post has received reflect poorly on LW’s collective philosophical acumen (‘masses’, relatively speaking: I don’t think this post deserves a really negative score, but I don’t think a post with such a big error in it should be this positive, still less be exhorted onto the front page). I’m currently writing a paper on population ethics (although I’m by no means an expert in the field), and seeing this post get so many upvotes despite a fatal misunderstanding of plausibly the most widely discussed case in population ethics signals that you guys don’t really understand the basics. This undermines the not-uncommon LW trope that analytic philosophy is not ‘on the same level’ as bona fide LW rationality, and makes me more likely to attribute variance between LW and the ‘mainstream view’ on ethics, philosophy of mind, quantum mechanics (or, indeed, decision theory or AI) to LWers being on the wrong side of the Dunning-Kruger effect.
Saying “the rationality minicamp was highly successful” before you have analyzed the data gathered to assess its success is irrational.
If the minicamp’s success is important—as suggested by its being listed first in Eliezer’s recommendation—why not wait until you CAN analyze the data, to see whether it really was successful, before you recommend hiring Luke? Doing so means (a) you can make a more persuasive case to donors, and (b) if the minicamp WASN’T successful, the hire can be reconsidered.
The fact this plug happened before the analysis signals that Eliezer is committed to recommending Luke’s hire regardless of whether the analysis shows the minicamp to have been successful. And if HE doesn’t think the minicamp’s success is relevant to the merits of hiring Luke, why is he using it to persuade us?
Disclaimer: I think Luke has added lots of value to this site, and I would be unsurprised if a later transparent analysis showed the minicamp to be highly successful. But the OP (as well as the comments offering various reasons/excuses for failing to present data, etc.) is suggestive of irrational salesmanship. Perhaps a salutary lesson that rationality experts still succumb to bias?