If memory serves, you’ve said that your plan is to wait until your parents die and then kill yourself. Even if you do that and donate your organs, you should cryopreserve your head for a chance of waking up in a world you’d want to live in, or one that could better help you. It’s a much worse strategy than simply trying to live to see such a world, but still better than final death.
I don’t know of such cases. From http://www.alcor.org/Library/html/neuropreservationfaq.html
“Neuroseparation” is performed by surgical removal of the body below the neck at the level of the sixth cervical vertebra at a temperature near 0°C. … The cephalon (head) is then perfused with cryoprotectants via the carotid and vertebral arteries prior to deep cooling. For neuropatients cryopreserved before the year 2000, neuroseparation was performed at the end of cryoprotective perfusion via the aorta.
If I understand correctly, at least Alcor’s current procedure for neuropreservation would be compatible with removing organs to be donated.
Using the martial arts metaphor, Mensa at least appears to be more about having a lot of muscle than about fighting skill, and there isn’t a strong agenda to improve either.
Looking into U.S. political parties, especially beyond the big two, doesn’t look like a good use of my time. Consider replacing that with scores from the World’s Smallest Political Quiz.
Strongly disagree on the Political Compass being better. The questions are heavily loaded, the very first one being:
If economic globalisation is inevitable, it should primarily serve humanity rather than the interests of trans-national corporations.
and many questions such as
Astrology accurately explains many things.
aren’t at all about what should be done or what should be the state of things. What are you going to infer about my political beliefs based on my answer to that?
(Edited to fix formatting.)
The likelihood of an existential risk materializing during this century.
The Political Compass seems to me, based on my own and friends’ experiences, to have a strong pull towards the lower left corner. As one of them said, “you would have to want to sacrifice babies to corporations to end up in the upper right corner.”
The World’s Smallest Political Quiz isn’t entirely neutral either, but it seems to me to spread people much more evenly, and, importantly, all of its questions are clearly on the two axes along which it measures political stance.
Does the first AGI have to be Friendly, or are we screwed?
The likelihood of the creation of an AGI leading to an intelligence explosion?
ETA: The likelihood of human uploads leading to an intelligence explosion?
Related, but different: Which of these world-saving causes should receive the most attention? (Maybe place these in order.)
Avoiding nuclear war
Creating a Friendly AI, including preventing the creation of AIs you don’t think are Friendly
Creating AI, with no need for it to be Friendly
Preventing the creation of AIs until humans are a lot smarter
Improving human cognition (should this include uploading capabilities?)
Defending against biological agents
Delaying nanotechnology development until we have sufficiently powerful AIs to set up defenses against gray goo
Creating and deploying anti-gray-goo nanotechnology
Avoiding environmental hazards
Space colonization
Fighting diseases
Fighting aging
Something else?
I think we mean here by existential risks something along the lines of, in Bostrom’s words, “… either annihilate Earth-originating intelligent life or drastically and permanently curtail its potential”, making countries irrelevant.
Thank you. It’s been a truly wonderful time, and not thanks to you alone, even if you were the driving factor. It will be difficult for anyone to fill your shoes, but then again, LW has shown that many others have great promise, enough that it can become a community much greater than it already is, which would mean success for you in this endeavour.
While I’m sad to see you give up your central role, for yours are the posts that I’ve generally found the most eye-opening and enjoyable, it is also a relief to see you returning focus to the core job of SIAI, as it indicates greater confidence in your chances of success there. Still, it would be interesting to hear why you considered this detour from concentrating on FAI so important to make at the point you did.
Fewer posts by Eliezer is bad.
Less work on Things Not To Be Discussed before May is much worse.
(Osmo A.) Wiio’s first law of communication: “Communication usually fails, except by accident.”
It works in Konqueror, and apparently would in Opera if mine didn’t have a general problem with YouTube. Not in Firefox, though.
That “way too many” sounds more like “in retrospect, I can’t believe how much whacking it took to convince me / how thickheaded I was.”
At least that’s how I feel about it WRT myself.
Thanks to fast internet connections, good web search, and online dictionaries, failing to expand an acronym only increases the cost from 5 seconds once, for the writer, to 5 seconds per reader...
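A back-of-the-envelope sketch of that trade-off; the audience size here is an assumption for illustration:

```python
# Rough cost comparison; the reader count is an assumed figure.
seconds_to_expand = 5        # one-time cost to the writer
seconds_per_lookup = 5       # cost to each reader who has to search
readers = 1000               # hypothetical audience size

writer_cost = seconds_to_expand              # 5 seconds, total
readers_cost = seconds_per_lookup * readers  # 5000 seconds, total
print(writer_cost, readers_cost)             # 5 vs 5000
```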
sometimes it really is pretty obvious that a particular error has been committed
The degree or lack of obviousness is a fact about the reader’s mind, not about the error.
http://lesswrong.com/user/Annoyance is currently reported as having a karma of 2^32 + 437
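A value just above 2^32 suggests a 32-bit wraparound somewhere. Here is a minimal sketch of one way it could happen, assuming, purely hypothetically (this is not LW’s actual code, and the starting score is made up), that a vote delta gets computed in unsigned 32-bit arithmetic before being added to a wider total:

```python
# Hypothetical reconstruction of the bug, not LW's actual code:
# a downvote computed in unsigned 32-bit arithmetic wraps around
# instead of going negative.
delta = (0 - 1) % 2**32   # unsigned 32-bit "-1" wraps to 2**32 - 1
karma = 438               # assumed score before the downvote
karma += delta            # 438 + (2**32 - 1) == 2**32 + 437
print(karma)              # 4294967733, i.e. 2**32 + 437
```

Under that assumption, a single wrapped downvote from a score of 438 lands exactly on the reported figure.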