Please go ahead! :) And let me know if you have trouble editing the wiki.
I guess the best way is just to post on a bunch of platforms and have those web pages backed up by various archiving services (notably the Wayback Machine).
Publishing a book is probably too much work for most use cases.
I’ve also heard one could publish text on the Bitcoin blockchain, for example, but I’m not sure how well that works.
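For the Wayback Machine specifically, you can trigger a snapshot programmatically. Here's a minimal sketch, assuming the public "Save Page Now" endpoint at https://web.archive.org/save/ (which may rate-limit or change its behavior):

```python
import requests

def archive_url(url: str) -> str:
    """Ask the Wayback Machine to snapshot `url` via Save Page Now."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)
    resp.raise_for_status()
    # The snapshot path is usually echoed back in the Content-Location header.
    snapshot = resp.headers.get("Content-Location", "")
    return f"https://web.archive.org{snapshot}" if snapshot else resp.url

if __name__ == "__main__":
    print(archive_url("https://example.com/my-post"))  # hypothetical page
```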
Sweet! I added it to the list of podcasts by people with a LessWrong profile: https://wiki.lesswrong.com/wiki/List_of_Podcasts#By_people_with_Less_Wrong_profiles
See section “Games and Exercises” of “How to Run a Successful LessWrong Meetup Group” for some ideas: https://wiki.lesswrong.com/mediawiki/images/c/ca/How_to_Run_a_Successful_Less_Wrong_Meetup_Group.pdf
Because it’s near the best cryonics facility in the world: https://alcor.org, and the quality of cryopreservations for people living in Phoenix is much higher on average than for remote cases (it reduces the delay before starting the procedure, avoids problems at borders, shortens the delay before starting the sub-zero cool-down, and Alcor has good relationships with nearby hospitals, better infrastructure, and more legal precedent supporting cryonics).
This summer I went to Phoenix for about a month to see what it was like. I organized the first local effective altruism event: https://www.facebook.com/groups/EffectiveAltruismPhoenix/. I reached out to the LessWrong group: https://www.facebook.com/groups/317266081721112/ and the SlateStarCodex group: https://groups.google.com/forum/#!forum/slate-star-codex-phoenix . There are 4 people in the Brain Debugging Discussion Facebook group who specified living in Phoenix on their Facebook profile: https://www.facebook.com/groups/144017955332/local_members/ , 1 in the Effective Altruism Facebook group: https://www.facebook.com/groups/437177563005273/local_members/ , 0 on EA Hub: https://eahub.org/profiles , and 7 in the Global Transhumanist Association: https://www.facebook.com/groups/2229380549/local_members/ . IIRC, I had reached out to (some of) them as well (and probably more). I had also invited people from the cryonics community. IIRC, 2-3 rationalists and 3 cryonicists showed up to the event, and maybe around 5 more were interested but couldn’t make it. IIRC, there had been a few SSC events in the previous 2 years, with maybe a total of something like 12 people showing up. I’ve also met with about 20 cryonics old-timers.
Other approaches I see towards solving this problem:
do movement building once I’m in Phoenix, or support other people who are interested in doing that
try to connect more with rationalists (or rationalist-adjacent people) who are already in Phoenix
instead of finding 75 interesting (to me) people, find only a dozen, but start a strong intentional community
here’s one project proposal for this idea: https://docs.google.com/document/d/1JdZ1lnXwoJatofsYa-oVKM0b55ev9EFr081Twd0YebI/ (this is just an idea; I’m much more flexible than that, and my interests are wider); I’ve visited a lot of intentional communities and have been running one for 3 years ( https://macroscope.house/ ), so I think I’d have the expertise to start a new one
significantly improve the cryonics response quality in other cities (current contenders: Salem, Berkeley, Los Angeles)
If you (or anyone you know) are interested in or able to help with any of those, that would be great and appreciated!
How many rationalists / EAs / interesting people do you know in Phoenix? Do you like living in Phoenix?
I would like to connect with more LessWrongers in Phoenix. If you want, you can add me on Facebook: https://www.facebook.com/mati.roy.09 and/or send me an email at email@example.com and/or chat in public on https://www.reddit.com/r/Mati_Roy/ .
Allow editing one’s username (context: I now go by Mati_Roy instead of MathieuRoy, but I don’t want to create another account and lose my history).
I added this thread here: https://causeprioritization.org/Coordination
Meta note: I’ll likely edit this answer when I think of more answers.
General note: If you’re interested in any of the proposals below (except the first one), please let me know, either here or at firstname.lastname@example.org .
Bootstrapping a commitment platform
I would make at least 5 commitments if a commitment platform were created (or rather, the creator might want to commit to improving a bare-bones platform if at least 200 people commit to making a total of at least 1,000 commitments; see the toy sketch at the end of this answer).
Improving the Cause Prioritization Wiki (CPW)
Migrate the CPW to the MediaWiki platform and improve its structure if enough people commit to edits totaling 2,000+.
Side note: I’ve added this thread here: https://causeprioritization.org/Coordination
Moving to Phoenix
If 75 EAs / rationalists / life extensionists committed to move to Phoenix this year, I’d move to Phoenix this year.
Financing cryonics research
If 500 other people committed 10,000 USD to cryonics research, I would give 10,000 USD to cryonics research.
Doing a cryonics related PhD
I would do a PhD in some field relevant to cryonics if some people committed to funding a fraction of my salary for doing cryonics research over 10 years. That is, they would give, say, 20% of my salary (about 10k USD / year) to whatever cryonics lab hires me.
Training a local cryonics team
I would arrange to have a local (to Montreal) standby cryonics team if at least 500,000 CAD were committed (exact amount TBD). (Although I guess I could just use Kickstarter for that, or do it entirely ad hoc?)
Organizing Rationalist Olympiads
If 12+ people committed to attending Rationalist Olympiads (in Montreal), I would organize Rationalist Olympiads.
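As promised above, here's a toy sketch of the threshold mechanism behind the commitment-platform idea (a minimal assurance-contract-style tracker; the names and numbers are illustrative, not an actual platform design):

```python
from dataclasses import dataclass, field

@dataclass
class ConditionalCommitment:
    """A pledge that only activates once enough people have joined."""
    description: str
    min_people: int       # e.g. 200 committers...
    min_commitments: int  # ...pledging 1,000 commitments in total
    pledges: dict = field(default_factory=dict)  # person -> number pledged

    def pledge(self, person: str, n: int) -> None:
        self.pledges[person] = self.pledges.get(person, 0) + n

    def is_activated(self) -> bool:
        return (len(self.pledges) >= self.min_people
                and sum(self.pledges.values()) >= self.min_commitments)

# Example: the platform-bootstrapping condition from above.
c = ConditionalCommitment("Improve a bare-bones commitment platform", 200, 1000)
c.pledge("mati", 5)
print(c.is_activated())  # False until both thresholds are met
```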
How would you classify existential risks within this framework? (or would you?)
Here’s my attempt. Any corrections or additions would be appreciated.
Transparent risks: asteroids (we roughly know the frequency?)
Opaque risks: geomagnetic storms (we don’t know how resistant the electric grid is, although we have an idea of their frequency), natural physics disasters (such as vacuum decay), being killed by an extraterrestrial civilization (could also fit black swans or adversarial environments depending on its nature)
Knightian risks:
- Black swans: ASI, nanotech, bioengineered pandemics, simulation shutdown (assuming it’s because of something we did)
- Dynamic environments: “dysgenic” pressures (maybe also adversarial), natural pandemics (the world is getting more connected, medicine more robust, etc., which makes it difficult to tell how the risks of natural pandemics are changing), nuclear holocaust (the game-theoretic equilibrium changes as we get nuclear weapons that are faster and more precise, better detectors, etc.)
- Adversarial environments: resource depletion or ecological destruction, a misguided world government or another static social equilibrium that stops technological progress, a repressive totalitarian global regime, take-over by a transcending upload (?), erosion of our potential or even our core values by evolutionary development (ex.: a Hansonian em world)
Other (?): technological arrest (“The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.” from https://nickbostrom.com/existential/risks.html )
From the original article by Nick Bostrom: “Reductions in existential risks are global public goods and may therefore be undersupplied by the market. Existential risks are a menace for everybody and may require acting on the international plane. Respect for national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk.” See: https://nickbostrom.com/existential/risks.html
Related comment: https://forum.effectivealtruism.org/posts/F7hZ8co3L82nTdX4f/do-eas-underestimate-opportunities-to-create-many-small#XGSQX45NkAN9qSB9A
It’s certainly interesting, from the perspective of the Doomsday Argument, if advanced civilizations have a thermodynamic incentive to wait until nearly the end of the universe before using their hoarded negentropy.
Related: That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox (https://arxiv.org/pdf/1705.03394.pdf)
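The thermodynamic incentive comes from Landauer's principle: the minimum energy cost of erasing a bit scales with temperature, so the same negentropy buys far more computation in a colder universe. A rough sketch (the ~10^30 multiplier is the estimate from the paper linked above, not my own calculation):

```latex
% Landauer's principle: erasing one bit at temperature T costs at least
E_{\mathrm{bit}} \ge k_B T \ln 2
% so a fixed free-energy reserve E funds at most
N \approx \frac{E}{k_B T \ln 2}
% bit-erasures, i.e. N \propto 1/T. Waiting while the universe cools from
% T_1 to T_2 multiplies the computation available from the same negentropy
% by T_1 / T_2; the aestivation paper estimates a potential gain on the
% order of 10^{30}.
```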
Assuming this is all true, and that Benevolent ASIs have the advantage, it’s worth noting that in finite universes this still requires the Benevolent ASIs to trade off computation spent on increasing people’s lifespans against computation spent on increasing the fraction of suffering-free observer-moments.
EA safety community
Have you published the results?
Might be of interest to some readers:
Spliddit’s goods calculator fairly divides jewelry, artworks, electronics, toys, furniture, financial assets, or even an entire estate between two or more people. You begin by providing a list of items that you wish to divide and a list of recipients. We then send the recipients links where they specify how much they believe each item is worth. Our algorithm uses these evaluations to propose a fair division of the items among the recipients.
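For intuition about how such an algorithm can work, here's a toy brute-force sketch of one standard approach to dividing indivisible goods: maximizing the product of the recipients' utilities (maximum Nash welfare). I believe something like this underlies Spliddit's goods division, but the sketch below is my own simplification, not Spliddit's actual algorithm:

```python
from itertools import product

def nash_allocation(valuations):
    """valuations: {person: {item: value}}. Brute-force the assignment of
    each item to one person that maximizes the product of total utilities.
    Exponential in the number of items, so only for tiny instances."""
    people = list(valuations)
    items = list(next(iter(valuations.values())))
    best, best_welfare = None, -1.0
    for assignment in product(people, repeat=len(items)):  # owner per item
        utils = {p: 0.0 for p in people}
        for item, owner in zip(items, assignment):
            utils[owner] += valuations[owner][item]
        welfare = 1.0
        for u in utils.values():
            welfare *= u
        if welfare > best_welfare:
            best_welfare, best = welfare, dict(zip(items, assignment))
    return best

# Example: two heirs who reported different values for three items.
print(nash_allocation({
    "ann": {"ring": 60, "piano": 30, "couch": 10},
    "bob": {"ring": 20, "piano": 50, "couch": 30},
}))  # -> {'ring': 'ann', 'piano': 'bob', 'couch': 'bob'}
```

Maximizing the product (rather than the sum) of utilities tends to balance the division: giving everything to one person zeroes out someone else's utility and hence the whole product.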