“The Eliezer Yudkowsky Center for Kids Who Can’t Think Good and Wanna Learn to Do Other Stuff Good Too”
hankx7787
“Sorry Arthur, but I’d guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.”—Dan Clemmensen, SL4
A few of my favorite posts in this thread:
And Death is not something I will ever embrace.
It is only a childish thing, that the human species has not yet outgrown.
And someday...
We’ll get over it...
And people won’t have to say goodbye any more…
And someday when the descendants of humanity have spread from star to star, they won’t tell the children about the history of Ancient Earth until they’re old enough to bear it; and when they learn they’ll weep to hear that such a thing as Death had ever once existed!
Do not go gentle into that good night. Rage, rage against the dying of the light!
done!
You might want to post this over at the Immortality Institute (http://imminst.org/). They recently created a multivitamin using community input, so all of this has been discussed over and over again there already.
Luke/SI asked me to look into what the academic literature might have to say about people in positions of power.
Why?
FYI, I’m now friends with this girl on facebook. She has posts going back to 2005 and ample evidence that she is legit. But I did not need to see this to know she was legitimate. I highly recommend you re-evaluate whatever cognitive process you were using that led to such over-skepticism… you obviously need to update something.
“There are lots of people who think that if they can just get enough of something, a mind will magically emerge. Facts, simulated neurons, GA trials, proposition evaluations/second, raw CPU power, whatever. It’s an impressively idiotic combination of mental laziness and wishful thinking.”—Michael Wilson
Pick a video game. Preferably something with a lot of consistent imagery/gameplay. A racing game running the same map would be a great example.
Play this video game from when you wake up to when you go to bed, with minimal time for breaks or distractions.
After hours of having these images burned into your retinas, randomly try closing your eyes for just a moment or two and rest your brain every once in a while. When I’m playing video games intensely and then I shut my eyes, sometimes it’s like I never even shut them in the first place—all the images are still mostly there and still mostly behaving as I’ve been watching them behave (e.g. I’m involuntarily visualizing the walls rushing by as I make turns in the race; my eyes feel funny and then I realize they are actually closed! holy crap! etc...).
I’ve found first-person games like FPS or racing games to be the most intense and reliable in producing this effect. You might also get better results at different times of day (e.g. alone in a quiet room in the middle of the night). But it works with any game; really, in general I can just close my eyes and have the scene flashing in my mind.
Here’s another thing to try:
Walk casually around your house or another familiar area with your eyes closed. Every few seconds (or when you think you really need to), blink your eyes open and instantly shut again, as quickly as you can. Try to retain as much information as possible for the next few seconds of your blind walk so you don’t run into things or step on things. You will be amazed at how normally you can perform with scant visual information.
er, also note that trying to visualize something from nothing can be extremely hard. For example, I cannot look at someone’s face and then imagine them with all kinds of new facial expressions that I’ve never actually seen on them. If you try to change something you’re not really equipped to visualize, it will just seem like an amorphous blob or an abstract symbolic designation rather than striking visual imagery. If this happens with something like geometry, that probably just means you need to spend more time trying until you get it, but don’t be too surprised if you’re trying to visualize a massively detailed real-life scene and things just aren’t as vivid as you’d like. Visualizing geometry and relatively abstract scenes is way, way less demanding than trying to manipulate the full visual resolution of real-life images in your mind.
evidently less wrong lacks a sense of humor :P
Completely wrong.
As a software engineer at a company with way too much work to go around, I can tell you that making a “good effort” goes a long way. 90% of the time you don’t have to “make it work or get a zero”. As long as you are showing progress you can generally keep the client happy (or at least not firing you) as you get things done, even if you are missing deadlines. And this seems very much normal to me. I’m not sure where in the market you have to “make it work or get a zero”. I’m not even convinced that exists.
You’re wrong in almost every way, and even though your post is essentially flaming rhetoric and fails to address anything in the linked-to post or make any substantive claims at all, I’ll still try to make a few points just because I have to at least say something.
“Cryonics was definitely a scam when the first organization was established.”
I’ve listened to some of the founders talk about what it was like when they first started. They were a small group of people who righteously believed in their cause, but had no money or organization. They pulled together in many amazing ways, at one point having to keep someone on ice in a bathtub before they could arrange a real solution, and won amazing and unprecedented legal victories by fighting for their cause. This is the sort of story I’ve heard. What are you even referring to? Or is your opinion just some random crap you pulled out of your ass which has no relation to reality (which is what I suspect)?
“very much unlikely to provide any significant life extension”
If you want to argue it’s a bad bet, fine. I would disagree, but you’re free to have your own opinion.
“The cryopreservation process causes significant brain damage, due to ischemia, cryoprotectant toxicity, mechanical stress caused by thermal contraction and possibly ice formation (it’s unclear whether they can achieve full vitrification of a human brain).”
How much damage does burial or cremation cause?
“Even if the process was in principle capable of preserving enough information to restore the self, there are significant chances that they may not perform it properly, since it entails difficult and time-critical procedures, and they work without any independent oversight and clearly have no incentive to report errors and mishaps.”
The implication being that the folks running cryonics organizations are frauds just out to make money who don’t give a damn about the patient? Another baseless and insulting accusation.
“Even if the preservation process works in principle and they performed it correctly, there are no known or even realistically foreseeable technologies that would allow restoration. Belief in magical nanotechnology is just blind faith.”
There is nothing magical about the prospects of nanotechnology. There are no assumptions that we will discover free energy or cold fusion, or that we will need anything we know violates the laws of physics. If you’re not going to point out exactly what is magical about widely held beliefs about the prospects of future technology, then it’s safe to assume this is yet another opinion pulled out of your ass.
“Even if restoration technology becomes available, it is far from obvious that future people will have an incentive to restore cryopreserved people, particularly at large scale.”
The continued existence of cryonics organizations with their current policies provides for reanimation. In addition, there are many perpetual trusts that provide redundant mechanisms for ensuring reanimation is provided for. Finally, what exactly does this say about your view of humanity? If you had a stable but preserved medical patient, and came up with a way to cure them, would you save their life, or just throw people away like garbage? If the latter, what the hell is wrong with you? Most people would not do that. Also see http://alcor.org/FAQs/faq07.html#today
“Last but not least, the financial structure of cryonics organizations is dubious, resembling Ponzi/pyramid schemes. The long-term viability of these organizations is questionable.”
Do you even know what a pyramid or Ponzi scheme is? A cryonics organization charges people the money required to perform the services it offers. They are very open about their financials. And yeah, the long-term viability of anything is questionable, but personally I don’t believe everything is certainly doomed in the long run.
Personally, I am not in a financial position to engage in philanthropy. I contributed $100 to her (and I contributed $100 to thefirstimmortal on the immortality institute forums, who did get cryopreserved with the Cryonics Institute after dying of cancer shortly thereafter), because I will always help someone who is terminal and begging for cryo. This girl is literally begging for her life. I hope to meet her someday in the distant future...
(As a side note, everyone should get started signing up for cryonics BEFORE anything bad happens—like now! I highly recommend just giving Rudi Hoffman a call. He makes it easy.)
This is the best answer I’ve seen so far. At the risk of losing karma, I’ll point out nevertheless that America is the land of libertarian individualism like no other, which in my opinion explains everything.
If you are older you should definitely be focusing on strategies for biological life extension (calorie restriction, or whatever), and everyone should sign up for cryonics as an insurance policy.
Ultimately, with full molecular nanotechnology, whether the engineering of negligible senescence is biological or digital is rather beside the point (“What exactly do you mean by ‘machine’, such that humans are not machines?”—Eliezer Yudkowsky).
However, Unfriendly AI would render the whole point moot. So the most important thing is to guarantee we get Friendly AI right.
“In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond “audacity” as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.”—Eliezer Yudkowsky
Yeah, and also the Complexity of Value sequence.
“What exactly do you mean by ‘machine’, such that humans are not machines?”—Eliezer Yudkowsky
Well I didn’t really substantively defend my position with reasons, and heaping on all the extra adjectives didn’t help :P
I was trying to figure out how to strike-through the unsupported adjectives, now I can’t figure out how to un-retract the comment… bleh what a mess.
While I still agree with all the adjectives, I’ll take them out to be less over the top. Here’s what the edit should say:
I’d argue this entire exercise is an indictment of Eliezer’s approach to Friendly AI. A formal, rigorous success of “Friendliness theory” coming BEFORE the Singularity is astronomically improbable.
What a Friendly Singularity will actually look like is an AGI researcher or researchers forging ahead at insane risk to themselves and the rest of humanity, somehow managing the improbable task of not annihilating humanity through intensive, inherently faulty safety engineering, and only later arriving at a formal solution to Friendliness theory post-Singularity. And of course it goes without saying that the odds are heavily against any such safety mechanisms succeeding, let alone that they will ever even be attempted.
Suffice to say a world in which we are successfully prepared to implement Friendly AI is unimaginable at this point.
And just to give some indication of where I’m coming from, I would say that this conclusion follows pretty directly if you buy Eliezer’s arguments in the sequences and elsewhere about locality and hard-takeoff, combined with his arguments that FAI is much harder than AGI. (see e.g. here)
Of course I have to wonder: is Eliezer holding out to try to “do the impossible” in some pipe-dream FAI scenario like the OP imagines, or does he agree with this argument but still think he’s somehow working in the best way possible to support this more realistic scenario when it comes up?
You’re living luminously :)