Mati_Roy (Mati Roy)
Another example of these types of questions: “If a man who cannot count finds a four-leaf clover, is he lucky?” (Stanisław Jerzy Lec)
What plans could a prospective cryonicist try out, beyond simply signing up, that could increase the odds of eventually having a pleasant re-animation experience?
If one has Alzheimer's, one could sign up with a cryonics organisation (if one hasn't already), go to the hospital nearest that organisation, call the organisation to tell them one is about to die, call the hospital to tell them where one will commit suicide, and then commit suicide in a way that minimizes damage to the brain. This would stop Alzheimer's from further deteriorating one's memory, and would therefore lead to a more pleasant re-animation experience.
Note that I am not saying this is necessarily the best thing to do, because I don't know what the chances are that cryonics will work; I am just saying that it would probably lead to a more pleasant re-animation experience.
My two favorite movies deal with trying to figure out what’s real and what isn’t, which is an important skill as a rationalist.
The two movies are: Total Recall (distinguishing reality from false memories) and Inception (distinguishing reality from dreams).
If someone knows other movies that would go in this category, I’d like to know!
You're right. So I would add one step: go to a state (or a country) whose laws don't require an autopsy.
For example, you can sign a “Religious Objection to Autopsy” in these five states: California, Maryland, New Jersey, New York, and Ohio (http://www.alcor.org/Library/html/certificateofreligiousbelief.html).
And of these five states, I'd choose California, since it's near Arizona, where there's a cryonics organisation (i.e. Alcor).
EDIT: or Ohio, since it's near Michigan, where there's also a cryonics organisation (i.e. the Cryonics Institute).
My name is Mathieu. One of my friends recommended that I read the main sequences a couple of months ago. I've read one third of them so far and I really like them. Now I want to get more involved in the LessWrong community than just reading the main sequences. I've just posted my first article. It's about a cryonics presentation I will give on Monday.
I wish there were a class about rationality at the beginning of high school (I'd remove another course to make room for one about rationality). Otherwise we keep learning things without knowing how our brain works (especially the biases it has), and this can cause problems when learning things and making decisions.
I study engineering physics at Laval University, Quebec, Canada. I've worked at my university's robotics laboratory and I really liked it. I like mathematics, logic and programming, but I hadn't really considered working in artificial intelligence before starting to read about it here (and then elsewhere), because I hadn't been exposed to the field. Next year I will (try to) do a master's in artificial intelligence. Moreover, I would eventually like to do (at least) an internship at MIRI.
By the way, the last question in the FAQ links to the 5th welcome thread rather than the 6th; I'm not sure where I should mention this, but maybe one of you does.
Thanks a lot to both of you!
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe?
I’ve included our potential simulators in this.
What is the probability that any of humankind’s revealed religions is more or less correct?
I've included religions such as Venturism.
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?
I've given answers both including and excluding the use of cryonics.
I estimate that 90% of people will defect.
I wouldn’t mind if my survey wasn’t anonymous.
I want my highest metawanting to be this sentence.
This is my highest order of metawanting.
It was determined by my wanting it (so it wasn't already made).
I’m joking. I don’t really want to want that my highest metawanting be wanting that my highest metawanting be wanting that my highest metawanting be wanting that.… haaaaaaaaaaaaaaaaaaaaa. :-)
If you know sign language, you can quickly comment on what someone is saying (e.g. "I have a question", "I disagree", or "I understand") without interrupting them. Then they can either let you talk, or finish what they were saying (if they judge it's better to do so), but at least they'll know what your reply will be about.
This is not an observation, but a proposition. I’m still not good at sign language.
In fact, I've seen it done with the "wait" gesture (by people who don't know a sign language per se), and it seemed to work well. So I hypothesise that it could work even better if someone knew more signs. But I have yet to put this to the test.
7-80
What's "ITS"? (Google only returns hits for "it's".) How much more expensive is it? Is it offered by Alcor and CI?
In summary: emergence is sometimes an observation but never an explanation.
Oh ok. Thank you.
Another interesting (and sad) example: during the conversation between Deepak Chopra and Richard Dawkins here, Deepak Chopra used the words "quantum leap" as an "explanation" for the origin of language, the origin of life, jumps in the fossil record, etc.
Edit: Finally he claimed it was a metaphor.
Another example: during the conversation between Deepak Chopra and Richard Dawkins, Deepak Chopra argues that our lack of a very good understanding of, for example, the origin of language or of jumps in the fossil record means that an actual discontinuity happened.
P(Aliens in observable universe): 74.3 ± 32.7 (60, 90, 99) [n = 1496]
P(Aliens in Milky Way): 44.9 ± 38.2 (5, 40, 85) [n = 1482]
There are (very probably around) 1.7x10^11 galaxies in the observable universe, so I don't understand how P(Aliens in Milky Way) can be so close to P(Aliens in observable universe). If P(Aliens in an average galaxy) = 0.0000000001, then P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11) = 0.9999999586. I know there are other factors that influence these numbers, but still: even if there's only a very slight chance of aliens in the Milky Way, P(Aliens in observable universe) should be almost certain. There are possible rational justifications for the survey's results, but I think (0.95) most people were victims of a cognitive bias. Scope insensitivity, maybe, because 1.7x10^11 galaxies is too big to imagine? What do you think?
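As a quick sketch of the arithmetic above (the per-galaxy probability of 10^-10 is just the illustrative assumption from my comment, not an estimate):

```python
# Assume aliens arise independently in each galaxy with a tiny
# per-galaxy probability p. With N galaxies, the probability of
# aliens SOMEWHERE in the observable universe is 1 - (1 - p)^N.
p = 1e-10   # assumed P(aliens in an average galaxy)
N = 1.7e11  # approximate number of galaxies in the observable universe

p_universe = 1 - (1 - p) ** N
print(p_universe)  # ≈ 0.9999999586
```

Even a one-in-ten-billion chance per galaxy makes aliens in the observable universe a near-certainty, which is why the two survey answers being so close looks like scope insensitivity.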
Tendency to cooperate on the prisoner’s dilemma was most highly correlated with items in the general leftist political cluster.
I wonder how many people cooperated only (or in part) because they knew the results would be correlated with their political views, and they wanted their "tribe"/community/group to look good. Maybe next year we could announce that this result won't be compared with the others? Then, if fewer people cooperate, it will suggest that some people cooperate to make their "group" look good. But if those people know that we want to compare next year's results with this year's in order to test this hypothesis, they will keep cooperating. To avoid most of this, we should compare only the people filling out the survey for the first time next year. What do you think?
I ended up deleting 40 answers that suggested there were less than ten million or more than eight billion Europeans, on the grounds that people probably weren’t really that far off so it was probably some kind of data entry error, and correcting everyone who entered a reasonable answer in individuals to answer in millions as the question asked.
I think you shouldn't have corrected anything. When I assign a probability to the correctness of my answer, I include a percentage for having misread the question or made a data entry error.
This year’s results suggest that was no fluke and that we haven’t even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!
Would some people be interested in answering 10 such questions, and giving their confidence in their answers, every month? That would provide better statistics and a way to see whether we're improving.
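One way such monthly answers could be scored is with a Brier score; the data below is made up for illustration, not from any actual survey:

```python
# Hypothetical monthly calibration round: each entry is
# (stated confidence that the answer is correct, whether it was correct).
answers = [
    (0.9, True),
    (0.6, False),
    (0.8, True),
    (0.7, True),
    (0.5, False),
]

# Brier score: mean squared difference between stated confidence and
# actual outcome (1 if correct, 0 if not). 0.0 is perfect; lower is
# better. Comparing scores month over month shows whether calibration
# is improving.
brier = sum((c - (1.0 if correct else 0.0)) ** 2
            for c, correct in answers) / len(answers)
print(round(brier, 3))  # 0.15 for this made-up data
```

A single month is noisy, which is another reason repeating the exercise monthly (as proposed above) would give better statistics than a one-off survey.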
I’m looking for a partner to read, study and do the exercises of the manual “Machine Learning in Action” by Peter Harrington in approximately 8 weeks.
I agree. But I don't think they can be that strongly dependent (not even close). How could they be?
Would you consider "dark energy" a fake explanation for the expansion of the universe?