Keep in mind that Alcor faces the following challenges: (1) Concern about lawsuits harming their future financial stability. (This is not a threat to patients already in cryo-suspension, since the funding sustaining them is held by a separate legal entity from Alcor.) (2) They often have to make very fast decisions about what to do when a patient legally dies. (3) Sometimes family members go against the wishes of the patient. Alcor tries to structure everything to minimize the harm from these three challenges. Note, I’ve been an Alcor member for more than 15 years.
An agreement where a patient’s wife has to approve what happens is perilous. Approval given by telephone might not give Alcor legal protection if the wife later says she didn’t agree, so Alcor (I’m guessing) would want the wife to submit a notarized agreement, but such a submission would take time and so make a full recovery less likely. Alcor has to consider the possibility that your spouse is filled with secret disgust at your wanting to be cryopreserved and would love to sue Alcor.
Excellent as always. What do people think about the probability of colleges going back to Zoom in late January / early February?
Both your answers are right. Another reason is that evolution has been operating on the basic human genotype for a very long time and has likely found most of the beneficial single-gene mutations and has had time to spread them. But because of copying errors, evolution is always able to “find” new harmful mutations.
What do employers look for in a college student who wants to work on AI safety (please be blunt)? What is the typical starting salary and average number of hours per week worked for people coming right out of college? How are female applicants viewed? (I write this as a professor at a women’s college who incorporates AI safety into one of his econ classes.)
Chapter 5 of A Course In Game Theory. Although you might already know the material.
Consider learning game theory.
As someone who teaches undergraduates a bit about AI safety/alignment in my economics of future technology course at Smith College, I much prefer “AI safety” as the term, since it is far clearer to people unfamiliar with the issues.
Would the clones be at least as smart as the original? On the yes side we have the Flynn effect, the steady increase over time of IQ scores. If this represents a real increase in intelligence (as opposed to just measurement error), we would expect a modern clone of someone born in 1903 to be smarter than the original, perhaps because a reduced pathogenic load is making nearly everyone more intelligent.
On the no side, the more exceptional someone is, the more regression to the mean we should expect from the clones. If your intelligence is some combination of genes and luck, then the smartest person ever should have had an enormous amount of luck (say, in terms of how random noise influenced brain development) as well as exceptional genes. The clones, on average, will have much less brain-development luck than the original did.
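The regression-to-the-mean point can be made concrete with a toy simulation (my own illustration; all the numbers are made up): model measured intelligence as a genetic component plus independent developmental luck, select the most extreme person in a large population, and then compare them to “clones” who keep the genes but redraw the luck.

```python
import random

random.seed(0)
N = 100_000  # population size (arbitrary)

# Intelligence = genes + developmental "luck" on an IQ-like scale.
# The 50/50 split and the standard deviations are made-up numbers.
people = [(random.gauss(50, 10), random.gauss(50, 10)) for _ in range(N)]

# Pick the smartest person in the population.
genes, luck = max(people, key=lambda p: p[0] + p[1])
original_iq = genes + luck

# Clones keep the genes but get a fresh, average-on-expectation draw of luck.
clone_iqs = [genes + random.gauss(50, 10) for _ in range(1_000)]
avg_clone_iq = sum(clone_iqs) / len(clone_iqs)

print(f"original:      {original_iq:.1f}")
print(f"average clone: {avg_clone_iq:.1f}")  # lower: the luck term regresses
```

The average clone lands well below the original because the original, being the population maximum, almost certainly drew exceptional luck as well as exceptional genes, and the luck does not copy over.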
Thanks for providing this; it seems extremely important for trying to predict how the pandemic will play out. So we should have been much more scared of COVID variants than we would be absent original antigenic sin. As it was predictable that COVID, because it was new to humans, would mutate, those, including myself, who had never heard of this force likely underestimated the expected harm of COVID. Why wasn’t this something that the official experts were talking about long ago?
Me at age 25 (who didn’t know he was autistic): “I will say the emperor is naked. Other people will like me more after I have said the emperor is naked. That girl who I asked out yesterday and who said, ‘I’m busy, maybe some other time’ might now agree to go on a date with me. I believe other people will like me more because I model other people’s thinking on my own, and I would have greater respect for someone else who says that the emperor is naked.”
Me at age 54 (who does know he is autistic): “I really, really want to say the emperor is naked. I get that this will cause most other people to think less of me. I emotionally believe that I should not care about anyone who would think less of me for saying the emperor is naked, but I intellectually know this isn’t true. I’m also aware that most other people would have some natural trepidation about saying the emperor is naked that I, being very weird, have inverted. This inversion can cause me to fail at social signaling games and hinder progress toward my goals. But I so very much want to say he is naked that I’m going to do it unless I can convince myself that the costs of doing so are very high. Being a tenured professor means I probably won’t suffer too much by being honest in this case, and I have succeeded in having a few friends who would not abandon me for saying the emperor is naked. Indeed, one such friend has a blog post up saying that the emperor is not only naked but also mentally defective.”
The Emperor’s New Clothes should be taught to autistic children who have IQs above, say, 90 with the lesson that “normal” people sometimes realize that the Emperor is naked, and sometimes come to truly believe he is clothed, but “normal” people almost always get very mad at anyone who correctly points out that the Emperor is naked. Being autistic can give you the superpower of caring more about truth than social acceptability. Use your power, but understand its personal cost.
On a personal note, being autistic is likely why I had the “courage” to be one of the three at my college to speak on the record with a New York Times reporter about political correctness at my workplace. A discussion with the reporter starts at 5:10 on this podcast, and this is the NYT article.
I was going to suggest you try to reach EA people, but they might want to achieve AGI as quickly as possible since a friendly AGI would likely quickly improve the world. While the pool is very small, I have noticed a strong overlap between people worried about unfriendly AGI and people who have signed up for cryonics, or who at least think cryonics is a reasonable choice. It might be worth doing a survey of computer programmers who have thought about AGI to see which traits correlate with being worried about unaligned AGI.
From a selfish viewpoint, younger people should want AGI development to go slower than older people do since, cryonics aside, the older you are the more likely you will die before an AGI has the ability to cure aging.
Yes, although you want to be very careful not to attract people to the field of AGI who don’t end up working on alignment but end up shortening the time to when we get super-human AGI.
A human made post-singularity AI would surpass the intellectual capabilities of ETs maybe 30 seconds after it did ours.
My guess is that aliens have either solved the alignment issue and are post-singularity themselves, or will stop us from having a singularity. I think any civilization capable of building spaceships will have explored AI, but I could just lack the imagination to consider otherwise.
Normally this is a good approach, but a problem with the UFOs-are-aliens theory is that there is a massive amount of evidence (much undoubtedly crap), the most important of which is likely classified top secret, so you have to put a lot of weight on what other people (especially those with direct access to those with top secret security clearances) say they believe.
I think if UFOs are aliens they on net increase our chance of survival. I mostly think Eliezer is right about AI risks, and if the aliens are here they clearly have the capacity to kill us but are not doing so, and the aliens would likely not want us to create a paperclip maximizer. They might stop us from creating a paperclip maximizer by killing us, but in that scenario we would have been dead anyway had the aliens not existed. It’s also possible that the aliens will save us by preventing us from creating a paperclip maximizer.
It’s extremely weird that atomic weapons have not been used in anger since WW II, and we know that humanity got lucky on several occasions. UFOs seem to like to be around ships that have nuclear weapons and power, so I assign some non-trivial probability to aliens having saved us from nuclear war.
As to the probability assessment, this is my first attempt, so don’t put a lot of weight on it: if no aliens, a 75% (my guess; I don’t know Eliezer’s) chance we destroy ourselves. I put UFOs being aliens at 40%, and say a 30% chance that, if this is true, they would save us from killing ourselves, and a 3% chance they would choose to destroy us in a situation in which we wouldn’t do it to ourselves.
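Here is one way to run the arithmetic on those estimates (the conditional structure is my own reading of the numbers, not a definitive model): absent aliens we self-destruct with 75% probability; if aliens are here, they avert 30% of the self-destruction cases but themselves destroy us in 3% of the cases where we would otherwise have survived.

```python
# One reading of the estimates above; the conditional structure is a guess.
p_self_destruct = 0.75  # chance we destroy ourselves absent aliens
p_aliens = 0.40         # chance UFOs are aliens
p_save = 0.30           # chance aliens avert a self-destruction, given aliens
p_kill = 0.03           # chance aliens destroy us when we'd otherwise survive

# Doom probability conditional on aliens being here:
p_doom_aliens = p_self_destruct * (1 - p_save) + (1 - p_self_destruct) * p_kill

# Overall doom, mixing over whether aliens are here:
p_doom = p_aliens * p_doom_aliens + (1 - p_aliens) * p_self_destruct

print(f"doom if aliens here: {p_doom_aliens:.4f}")  # 0.5325
print(f"overall doom:        {p_doom:.4f}")         # 0.6630
```

Under these numbers, conditioning on aliens being here lowers the doom probability from 75% to about 53%, and mixing over the 40% chance they are here gives roughly 66% overall, so the aliens-are-here branch helps on net.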
Yes. I don’t know you, so please don’t read this as an insult. But if Sam Altman and Tyler Cowen take an idea seriously, don’t you have to as well? Remember that disagreement is disrespect, so your saying that UFOs should not be taken seriously is your saying that you have a better reasoning process than either of those two men.
Sam Altman seems to take UFOs seriously. See 17:14 of this talk.
In the now-deleted discussion about Sam Altman’s talk to the SSC Online Meetup there was strong disagreement about what Sam Altman might have said about UFOs. If you go to 17:14 of this discussion that Altman had with Tyler Cowen, you hear Altman briefly ask Cowen about UFOs. Altman says that “[UFOs have] gotten a lot and simultaneously not nearly enough attention.”