“Not being able to get the future exactly right doesn’t mean you don’t have to think about it.”
--Peter Thiel
I took the survey, and wanted it to be longer.
I just want to mention how much I appreciate these threads: this is my most trusted source of media recommendations! Thank you to all involved.
If you can’t appeal to reason to make reason appealing, you appeal to emotion and authority instead.
I don’t think it’s been asked before on Less Wrong, and it’s an interesting question.
It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.
The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since it raises the probability that a stronger Alcor will exist when you are older and will likely need it more than you do now.
Additionally, while it’s true that Alcor would be unlikely to reach you in time if you were to die suddenly, compare that risk to your chance of survival if instead you don’t join Alcor soon enough and, after your hypothetical fatal car crash, you end up rotting in the ground.
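The argument above is really a small expected-value comparison. Here is a minimal sketch of it in Python; every number below is a made-up placeholder for illustration, not a real actuarial estimate, and the function name is my own.

```python
# Hypothetical illustration of the sign-up-now vs. wait argument.
# All probabilities are invented placeholders, not real estimates.

def p_preserved(signed_up, p_early_death=0.02, p_reached_in_time=0.5):
    """Rough chance of being cryopreserved after an unexpected early death.

    p_early_death: chance of dying young at all (placeholder).
    p_reached_in_time: chance Alcor can act in time, given membership
    (placeholder; higher for slow terminal illness, lower for a car crash).
    """
    if not signed_up:
        return 0.0  # not a member: no preservation regardless of circumstances
    return p_early_death * p_reached_in_time

# Even a modest chance of being reached in time beats the zero you get
# by not having signed up at all.
print(p_preserved(signed_up=True))
print(p_preserved(signed_up=False))
```

The point of the sketch is only structural: however small `p_reached_in_time` is, the comparison is against exactly zero, which is what the "rotting in the ground" branch contributes.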
And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.
There are also altruistic reasons to join Alcor, but that’s a separate issue.
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.
I wanted to love this post, but stylistic issues got in the way.
It read too much like a gwern essay: certainly interesting, but in need of a summary and a guide to how it can be applied in practice. A string of highlights and commentary with no clear underlying organization or conclusion is not optimally useful.
That being said, I appreciate you taking the time to create this post, as well as your call for constructive criticism.
Well, does time permit?
Pacific Rim pleasantly surprised me. I could list the manifold ways this movie dramatizes how to correctly deal with catastrophic risks, but I don’t want to spoil it for you.
Plus it is awesome, in both senses of the word.
I recently noticed “The Fable of the Dragon-Tyrant” under the front page’s Featured Articles section, which caused me to realize that there’s more to Featured Articles than the Sequences alone. This particular article (an excellent one, by the way) is also not from Less Wrong itself, yet is obviously relevant to it; it’s hosted on Nick Bostrom’s personal site.
I’m interested in reading high-quality non-Sequences articles (I’m making my way through the Sequences separately using the [SEQ RERUN] feature) relevant to Less Wrong that I might have missed, so is there an archive of Featured Articles? I looked, but was unable to find one.
One possible alternative would be choosing to appear in the Americas.
Good idea. (Note: if you haven’t seen the film, what follows is a spoiler-heavy synopsis.)
My line of thought:
Gur pngnfgebcuvp (yngre erirnyrq gb or rkvfgragvny) evfx bs gur svyz vf gur svpgvbany bar bs tvnag zbafgref, be xnvwh, nggnpxvat uhzna cbchyngvba pragref, ohg greebevfz hfvat shgher jrncbaf, nagvovbgvp erfvfgnapr, pyvzngr punatr, rgp. pna pyrneyl or fhofgvghgrq va.
Tvnag zbafgref pna jbex nf na rfcrpvnyyl ivfpreny qenzngvmngvba bs pngnfgebcuvp evfxf, nf abg bayl Cnpvsvp Evz ohg gur bevtvany, harqvgrq, Wncnarfr-ynathntr irefvba bs gur svyz Tbwven (gur frzvany xnvwh zbivr) qrzbafgengr. V jbhyqa’g or fhecevfrq vs jr vafgvapgviryl srne ynetr, ntterffvir cerqngbef zber fb guna, fnl, punatrf va nirentr grzcrengher gung nttertngr bire gur pbhefr bs qrpnqrf, be cnegvpyrf fb fznyy jr pna’g frr gurz jvgubhg zvpebfpbcrf.
Tbqmvyyn’f (uvtuyl tencuvp, crbcyr ohea nyvir) qrfgehpgvba bs Gbxlb ercerfragf gur Jbeyq Jne VV Nzrevpna sverobzovat bs gung pvgl naq gur ahpyrne nggnpxf ba Uvebfuvzn naq Antnfnxv. Gur punenpgref bs Tbwven rkcyvpvgyl yrnir bcra gur cbffvovyvgl bs shgher xnvwh (ernyyl jnef, ahpyrne rkpunatrf, naq gur raivebazragny rssrpgf bs ahpyrne jrncbaf grfgvat) jrnevat uhznavgl qbja hagvy gurer vf abguvat yrsg.
Ubjrire, gubhtu Tbwven raqf ba n crffvzvfgvp abgr, Cnpvsvp Evz qbrf abg. Urer, pbbcrengvba vf fhssvpvrag gb birepbzr naq ryvzvangr gur guerng. Gur zbivr’f gvgyr vzcyvrf gur arprffvgl bs vagreangvbany pbbcrengvba, ohg gur vzcebivat eryngvbafuvcf orgjrra gur Wnrtre cvybgf nyfb qrzbafgengr gur gurzr bs pbbcrengvba qbja gb gur vagrecrefbany yriry. Guvf vf rfcrpvnyyl qenzngvmrq ol gur pbaprcg bs Qevsgvat, gur qrrcyl crefbany, pnershyyl rkrphgrq, OPV-ranoyrq flapuebavmngvba (ba nyy yriryf: zragny, rzbgvbany, naq culfvpny) arprffnel gb cvybg Wnrtref naq guhf qrsrng gur xnvwh.
Jura uhznaf nggrzcg gb svtug tvnag zbafgref (naq pngnfgebcuvp evfxf) nybar, gurl ghea bhg rvgure qrnq be onqyl qnzntrq, obgu culfvpnyyl naq cflpubybtvpnyyl. Nf gur qverpgbe, qry Gbeb, chg vg va na vagreivrj: (ersreevat gb bar bs gur pbasyvpgf orgjrra gur Wnrtre cvybgf): “Gung thl lbh jrer orngvat gur fuvg bhg bs gra zvahgrf ntb? Gung’f gur thl lbh unir gb jbex jvgu svir zvahgrf yngre” naq (zber trarenyyl) “Rvgure jr trg nybat be jr qvr”.
Ng svefg gur uhznaf qb fb jryy ntnvafg gur xnvwh gung, bs pbhefr, gurl trg birepbasvqrag naq pbzcynprag, naq gura gur gvqr bs jne gheaf ntnvafg gurz nf gur xnvwh nqncg gb uhzna qrsrafrf. Vafgrnq bs vzcebivat gur nyernql rkcrafvir Wnrtre cebtenz be nggrzcgvat bgure npgvir nccebnpurf gb gur xnvwh ceboyrz gung zvtug cbffvoyl snvy, srneshy tbireazrag ohernhpengf pubbfr gur “fnsr” cnffvir bcgvba naq ortva pbafgehpgvba bs znffvir jnyyf nebhaq nyy gur znwbe pvgvrf bs gur Cnpvsvp Evz.
Jura vg orpbzrf pyrne gung guvf fgengrtl jvyy abg jbex, fbzr crbcyr qrfcnve, fbzr svtug rnpu bgure, fbzr ergerng gb nabgure ynlre bs jnyyf pbafgehpgrq shegure vaynaq, naq fbzr ernpg jvgu nccebcevngr, gubhtu qrfcrengr, npgvivgl. Nf bar punenpgre, Fgnpxre, chgf vg: “Unira’g lbh urneq? Vg’f gur raq bs gur jbeyq. Jurer jbhyq lbh engure qvr? Urer, be va n Wnrtre pbpxcvg?” Guvf vf gur pbeerpg erfcbafr gb pngnfgebcuvp naq rkvfgragvny evfxf.
Gur synfuonpx fprar vaibyivat gur ivbyrag qrngu bs Znxb’f snzvyl jura fur vf n fznyy puvyq, gur oyrnx, qrinfgngrq Gbxlb naq ure greevslvat arne-qrngu nf n tvnag zbafgre gnxrf n cnegvphyne vagrerfg va ure rssrpgviryl fubjf gur uhzna pbfg bs pngnfgebcuvp evfxf ba na rzbgvbany yriry, nf fpbcr artyrpg vf bs pbhefr na vffhr urer.
Hygvzngryl, jung’f yrsg bs na haqrefgnssrq, haqreshaqrq Wnrtre cebtenz (n snzvyvne ceboyrz sbe rkvfgragvny evfx betnavmngvbaf) fhpprffshyyl chefhrf abg bayl na npgvir qrsrafr ohg na bssrafvir fgengrtl vagraqrq gb raq xnvwh nggnpxf bapr naq sbe nyy. (Erzvaqf zr bs gur Sevraqyl NV nccebnpu gb abg bayl HSNV ohg gb pngnfgebcuvp naq rkvfgragvny evfxf va trareny). Gurl qb fb va bccbfvgvba gb gur pbairagvbany jvfqbz, jvgu pbbcrengvba, jvgu pbhentr, jvgu zngurzngvpf, naq jvgu fpvrapr naq grpuabybtl.
“Gbqnl ng gur rqtr bs bhe ubcr, ng gur raq bs bhe gvzr, jr unir pubfra gb oryvrir va rnpu bgure. Gbqnl jr snpr gur zbafgref gung ner ng bhe qbbe, gbqnl jr ner pnapryyvat gur ncbpnylcfr!” N zrffntr gung, gubhtu purrfl, cyrnfnagyl fhecevfrf naq vafcverf zr guebhtu vgf bcgvzvfz va guvf ntr bs plavpvfz.
Hmmm. You do have some interesting ideas regarding cryonics funding that sound promising, but to be safe I would talk to Alcor directly, specifically Diane Cremeens, to confirm ahead of time that Alcor will accept them.
To add to Principle #5, in a conversational style: “if something exists, that something can be quantified. Beauty, love, and joy are concrete and measurable; you just fail at it. To be fair, you lack the scientific and technological means of doing so, but failure is failure. Your failure to quantify something of value does not devalue it.”
Advice from the Less Wrong archives.
Check out this FDA speculation.
Scott Alexander comments here.
Try some exposure therapy to whatever it is you’re often afraid of. Can’t think of what you’re often afraid of? I’d be surprised if you’re completely immune to every common phobia.
Michaelcurzi’s How to avoid dying in a car crash is relevant. Bentarm’s comment on that thread makes an excellent point regarding coronary heart disease.
There is also Eliezer Yudkowsky’s You Only Live Twice and Robin Hanson’s We Agree: Get Froze on cryonics.
I have a few questions, and I apologize if these are too basic:
1) How concerned is SI with existential risks, as opposed to catastrophic risks?
2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?
3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?
Edited to add: thanks for responding!
Taken. Wasn’t bothered by the length—could be even longer next time.