Is there anywhere I can find a decent analysis of the effectiveness and feasibility of our current methods of cryonic preservation?
(one that doesn’t originate with a cryonics institute)
I’m struck by Dumbledore’s ruthlessness
Actually I think he was just following his own advice:
While survives any remnant of our kind, that piece is yet in play, though the stars should die in heaven. [...] Know the value of all your other pieces, and play to win.
All things considered I think it was the most compassionate choice he could have made.
As far as I understand it, CFAR’s current focus is research and developing their rationality curriculum. The workshops exist to facilitate that research: they’re a good way to test which bits of rationality work and to determine the best way to teach them.
In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world
In response to the question “Are you trying to make rationality part of primary and secondary school curricula?” the CFAR FAQ notes that:
We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.
So I’m fairly sure they agree with you on the importance of making broad improvements to education. It’s also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you’re feeling charitable.
However they go on to say:
At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.
Additionally, for them to change public-school curricula they first have to develop a rationality curriculum, which is precisely what they’re doing at the moment: building a ‘minimum strategic product’. Giving “semi-advanced cognitive self-improvement workshops to the Silicon Valley elite” is just a convenient way to test this stuff.
You might argue for giving the rationality workshops to “people who have not even heard of the basics”, but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they’ve probably already heard of the basics). Then there’s the fact that rationality just isn’t viewed as useful by the general public, so most people won’t care about learning it. Finally, teaching the basics of rationality in a way that sticks is quite difficult.
Mind, if what you’re really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.
I don’t think CFAR is aiming to propagandize any worldview; they’re about developing rationality education, not getting people to adopt any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I’m curious why you think they might be (intentionally or unintentionally) doing so.
Your application form link is broken!
As far as I know there’s no single sure-fire way to ensure that asking won’t put them in a position where refusal costs them utility (for example, their utility function could penalize refusing requests as a matter of course). However, general strategies include:
Not asking in front of others, particularly members of their social group. (Thus refusal won’t impact their reputation.)
Conditioning the request on it being convenient for them (e.g. using phrasing such as “If you’ve got some free time, would you mind...”).
Not giving the impression that their help is make-or-break for your goals (e.g. don’t say “As you’re the only person I know who can do [such&such], could you do [so&so] for me?”).
If possible, doing something nice for them in return; it need not be full reciprocation, but it’s much harder to resent someone who gave you tea and biscuits, even if you were doing them a favor at the time.
Of course there’s no substitute for good judgement.
At least in my ideal poster-space this hierarchy would be included. Considering that knowing about biases can hurt people, a general theme of “how to resolve arguments better”, rather than “here are some fallacies, avoid them/point them out”, can’t hurt.
Firstly let me just say: this is brilliant.
I’m not sure those words quite encapsulate the awesome but they’ll have to do. Kudos on putting it all together, very well executed.
This reminds me of something I heard at a Secular Society talk a while back about a minister who identified as a Christian-Atheist. Reportedly he promoted religion as a human-made construct rather than a set of beliefs about how the world is.
Though on avoiding the whole becoming-a-cult thing, it might be an idea to change the theme yearly? I mean, Lovecraft is beyond epic, but having a different “Santa” each year might help to counter possible cult-ishness. Also, having a different range of rationality-inspired literature each year should help toward the same goal. Though those are just suggestions, of course.
Preemptive Solution: Leave a line of retreat; make sure that there is little or no cost for them if they choose to refuse, thus reducing the likelihood that they will help you out of compulsion.
no, no, No, NO, NO!
That is not what a paradox does. More importantly, saying rationality is a matter of degrees is nothing like saying that there are multiple equally valid truths to a situation.
It’s called the Fallacy of Grey, read about it.
TLDR: I managed to fix my terrible sleep pattern by creating the right habits.
I’ve been there, up until a month ago actually.
I’ve tried a whole slew of things to fix my sleeping pattern over the past couple of years. F.lux, conservative use of melatonin, and cutting down on caffeine all helped but none of them really fixed the problem.
What I found was that I’d often stay up late in order to get more done, and it would feel like I was getting more done (when in actual fact I was just gaining hours now in exchange for losing hours in the future). Alongside this, my sleep pattern was so hectic that any attempt to sleep at a “normal” time was thwarted by a lack of tiredness; I could use melatonin to ‘reset’ this, but it’d rarely stay that way.
The first thing that helped was sitting down and working out, hour by hour, how much time I actually have in a week; this stopped me from thinking I could gain more time by staying up later. The second thing was forming good habits around my sleep. Habits typically follow a cue-routine-reward pattern and require fairly quick feedback; as a result, building a habit where the routine is sleeping for eight hours is quite hard.
Instead I added a pattern on either side of the time I wished to sleep: the first with the goal of making it easier for me to fall asleep, and the second with the goal of making it easier for me to get up.
The pre-sleep pattern followed:
Cue: ‘Hey it’s 10:30pm’
Routine: Turning off technology -> Reading -> Meditation
Reward: Mug of hot chocolate
While the post-sleep pattern followed:
Cue: Alarm goes off.
Routine: Get out of bed.
Reward: Breakfast.
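(For what it’s worth, the two loops amount to nothing more complicated than the following; this is just the lists above written out as plain data, with my own cues and rewards filled in.)

```python
# The two habit loops above, written out as plain cue -> routine -> reward data.
habits = {
    "pre-sleep": {
        "cue": "It's 10:30pm",
        "routine": ["Turn off technology", "Read", "Meditate"],
        "reward": "Mug of hot chocolate",
    },
    "post-sleep": {
        "cue": "Alarm goes off",
        "routine": ["Get out of bed"],
        "reward": "Breakfast",
    },
}
```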
Since doing this I’ve been awake at 8 am every morning with little trouble, and the existence of those habits has made it easy to add other habits into my routine. Breakfast, for example, is now a cue to go out running on days when I don’t have lectures (this is very surprising for me; I’ve received several comments along the lines of “Who are you and what have you done with the real you” since I began doing this).
I hope you find this useful.
It’s quite likely you can solve the problem of people mis-associating SI with “accelerating change” without having to change names.
The AI researcher saw the word ‘Singularity’ and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil’s “accelerating change” technology curves.
What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements,” doesn’t necessarily make it obvious enough that SI isn’t about “accelerating change”. (In fact, it sounds a lot like an accelerating-change-type idea.)
In my opinion at least, you need to get any potential supporter/critic to make the association between the name “Singularity Institute” and what SI actually does (and its goals) as soon as possible. While changing the name could do that, “Singularity Institute” has many useful aesthetic qualities that a replacement name probably won’t have.
On the other hand, doing something like adding a clear tag-line about what SI does (e.g. “Pioneering safe-AI research”) to the header would be a relatively cheap and effective solution. Perhaps rewriting the concise summary to discuss the dangers of a smarter-than-human AI before postulating the possibility of an intelligence explosion would also be effective, seeing as a smarter-than-human AI would need to be friendly, intelligence explosion or no.
I wouldn’t worry too much about it if I were you. Around your age I experienced the same kind of lack of feeling, and a year or so later so did a friend of mine. However, it passed for both of us. Retrospectively I suspect a great deal of the problem was that neither I nor my friend had invested time or effort into anything. Try working on something or executing a plot, and also read http://lesswrong.com/lw/bq0/be_happier/ .
Also, reading good books works.
Are you sure precommitment is a useful strategy here? Generally the use of precommitments is only worthwhile when the other actors behave in a rational manner (in the strictly economic sense), consider your precommitment credible, and are not willing to pay the cost of you following through on your precommitment.
While I’m in no position to comment on how rational your parents are, it’s likely that the cost of you being upset with them is a price they’re willing to pay for what they may conceptualize as “keeping you safe”, “good parenting” or whatever their claimed good intentions were. As a result no amount of precommitment will let you win that situation, and we all know that rationalists should win.
The optimal solution is probably the one where your parents no longer feel that they should listen to your phone calls or use physical coercion in the first place. I couldn’t say exactly how you go about achieving this without knowing more about your parents’ intentions. However you should be able to figure out what their goal was and explain to them how they can achieve it without using force or eavesdropping on you.
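As a rough illustration of the reasoning above (a minimal sketch with made-up payoff numbers and a hypothetical helper function; it is not a model of any particular family), a precommitment only changes the other party's behaviour when the cost you can credibly impose outweighs what they think they gain by ignoring you:

```python
# Hypothetical, illustrative numbers only: does a precommitment deter?
def deters(value_to_them_of_ignoring_you: float,
           cost_you_can_impose: float,
           credibility: float) -> bool:
    """A precommitment deters only if the *expected* cost to the other
    party of ignoring it exceeds what they gain by doing so."""
    expected_cost = credibility * cost_you_can_impose
    return expected_cost > value_to_them_of_ignoring_you

# A parent who values "keeping you safe" at 10 units and rates your
# displeasure at 2 units won't be deterred even by a fully credible threat.
print(deters(value_to_them_of_ignoring_you=10.0,
             cost_you_can_impose=2.0,
             credibility=1.0))   # False -> the precommitment doesn't help
```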
Good point, thanks, fixed it.
A newborn’s brain can be specified by about 10^9 bits of genetic information
While the brain of a newborn baby can be generated from 10^9 bits of genetic information, it’s not true that this is enough to suitably specify a particular newborn’s brain. This is because of the large impact that conditions in the womb have on brain development (e.g. drugs & alcohol) and the limited extent to which brain structure is inherited.
However it’s quite likely that only specific sections of the genome contribute to brain development, meaning that your lower bound for how much information it takes to generate a newborn’s brain is (*probably!) much lower than 10^9 bits. Though this still won’t be enough information to specify a particular newborn’s brain, just enough to considerably narrow down the region of brain-structure-space that the newborn’s brain can occupy.
*Don’t take my word for this – I don’t know nearly enough to substantiate that claim.
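For a rough sense of where figures like 10^9 bits come from, here is the back-of-the-envelope arithmetic (the genome size and bits-per-base are standard figures; the brain-relevant fraction below is a made-up placeholder, not a real estimate):

```python
# Back-of-the-envelope: information content of the human genome.
base_pairs = 3.2e9            # approximate length of the human genome
bits_per_base = 2             # 4 possible bases -> log2(4) = 2 bits each
raw_bits = base_pairs * bits_per_base
print(f"raw genome content: {raw_bits:.1e} bits (~{raw_bits / 8 / 1e6:.0f} MB)")

# The genome is highly compressible (repeats, shared structure), which is
# roughly where estimates around 10^9 bits come from. If only some fraction
# of that information bears on brain development, the bound drops further.
brain_relevant_fraction = 0.1  # purely illustrative placeholder, NOT a real figure
print(f"illustrative brain-relevant bound: "
      f"{raw_bits * brain_relevant_fraction:.1e} bits")
```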
Also, on a tentative note, it might be worth comparing scans of a brain before and after it’s been cryogenically preserved, in order to see if it’s possible to tell the difference (and subsequently whether the data from the pre-freezing brain can be approximated from the post-freezing brain data).
True! And that would be a tragedy.
Nice article, have a karma!
There’s a lot of information there; I’d suggest perhaps using this article as the basis for a four-part series, one on each area. The content is non-obvious, so having the extra space to really break the inferential distance down into small steps, so that the conclusions are intuitive to non-rationalists, would be useful.
(As an aside, I suspect that writing for the CFAR blog is reasonably high-impact for the time investment right now. Personally I found CFAR’s apparent radio silence since September unnerving, and it’s possible that it was part of the reason the matching fundraiser struggled. Despite Anna’s excellent post on CFAR’s progress, the lack of activity may have caused people to feel as though CFAR was stagnating and thus be less inclined to part with their money on a System 1 level.)
Fixed it. I don’t think I’ve ever consciously registered that adsorb != absorb, so thanks for that.
Donated $180.
I was planning on donating this money, my yearly ‘charity donation’ budget (it’s meager; I’m an undergraduate), to a typical EA charity such as the Against Malaria Foundation: a cash transaction for the utilons, warm fuzzies and general EA cred. However the above has forced me to reconsider this course of action in light of the following:
The possibility that CFAR may not receive sufficient future funding. CFAR’s expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is something around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don’t quote me on that; spelled out in the rough sketch at the end of this comment). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.
A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn’t. I already accept that 80,000 hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.
While donations to a sufficiently funded CFAR are most likely much lower utility than donations to AMF, donations to ensure CFAR’s continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.
CFAR isn’t a typical EA cause. This means that if I don’t donate to keep AMF going, another EA will; however, if I don’t donate to keep CFAR going, there’s a reasonable chance that someone else won’t. In other words, my donations to CFAR aren’t replaceable.
To put my utilons where my mouth is: the funding gap for CFAR looks like something on the order of $400k a year. GiveWell reckons that you can save a life for $5k by donating to the right charity. So CFAR costs the equivalent of 80 lives a year to run, which raises the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask whether CFAR’s work over the next decade will save more than 800 lives, the answer becomes a definite yes.
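Spelling that arithmetic out (all figures are the ballpark numbers quoted above; this is just the back-of-the-envelope sum, not a real cost-effectiveness estimate):

```python
# Back-of-the-envelope figures quoted above; all approximate.
expenditure_last_year = 510_000   # ignoring workshop costs offset by revenue
current_balance       = 130_000
print(f"rough donations needed: "
      f"${expenditure_last_year - current_balance:,}")   # ~$380k

funding_gap_per_year = 400_000    # the ~$400k/year ballpark used above
cost_to_save_a_life  = 5_000      # GiveWell-style estimate
lives_per_year = funding_gap_per_year / cost_to_save_a_life
print(f"equivalent to {lives_per_year:.0f} lives/year, "
      f"{lives_per_year * 10:.0f} over a decade")        # 80 and 800
```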