If you mistake confusion for dishonesty, you’ll perceive a lot of ill will that isn’t really there.
TimFreeman
Seems like a good place for the experiment I described earlier. What would you do differently if God spoke to you and said:
I quit. From now on, the materialists are right, your mind is in your brain, there is no soul, no afterlife, no reincarnation, no heaven, and no hell. If your brain is destroyed before you can copy the information out, you’re gone.
The process of vitrifying the head makes the rest of the body unsuitable for organ donations. If the organs are extracted first, then the large resulting leaks in the circulatory system make perfusing the brain difficult. If the organs are extracted after the brain is properly perfused, they’ve been perfused too, and with the wrong substances for the purposes of organ donation.
Do rational people have sex?
There are a bunch of awesome sexual things one might try. However, even if we had a list of such things, I’m not sure how to navigate around the emotional pitfalls of organizing a group of people to learn them.
In my experience, when my sex life started working I immediately lost interest in dancing, making music, making art, and learning martial arts. I was somewhat surprised to discover that all those things were for me, apparently, part of attracting a lover rather than something worthwhile in themselves. Certainly now that I’ve been married 20 years I’d much rather invest effort in improving my sex life than in doing any of those things.
KrioRus might be worth a try.
Be aware that some jurisdictions, such as British Columbia and France, go out of their way to outlaw it.
The proposition “higher mathematics is useful” can be communicated to people with negligible mathematical training, along with specifics and supporting evidence. Higher math is required to describe the physics that can figure out from first principles how chemistry should work, and somewhat less advanced math can figure out the area under curves and so forth.
In particular, a person who knows no math can observe that people who know higher math are required in order to do chemistry simulations, for example.
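For instance, even the “area under curves” part is checkable: the standard first result of integral calculus is

$$\int_0^1 x^2 \, dx = \left[\frac{x^3}{3}\right]_0^1 = \frac{1}{3},$$

and someone with negligible mathematical training can verify the answer to within a few percent by counting grid squares under the parabola.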
Is there a similarly easy way to make a claim that enlightenment is useful that is testable by unenlightened people?
(For the record, I’m inclined to believe you, but it would be comforting to have a concrete argument for it.)
Well, one story is that humans and brains are irrational, and then you don’t need a utility function or any other specific description of how it works. Just figure out what’s really there and model it.
The other story is that we’re hoping to make a Friendly AI that might make rational decisions to help people get what they want in some sense. The only way I can see to do that is to model people as though they actually want something, which seems to imply having a utility function that says what they want more and what they want less. Yes, it’s not true, people aren’t that rational, but if a FAI or anyone else is going to help you get what you want, it has to model you as wanting something (and as making mistakes when you don’t behave as though you want something).
So it comes down to this question: If I model you as using some parallel decision theory, and I want to help you get what you want, how do I extract “what you want” from the model without first somehow converting that model to one that has a utility function?
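For concreteness, here’s a minimal sketch of the kind of extraction I have in mind (a toy of my own, not anything from SIAI; the softmax choice noise is just one standard way to formalize “wanting something but making mistakes”):

```python
import numpy as np

# Toy model: the person noisily maximizes an unknown utility function.
# Choice probabilities are a softmax over utilities ("Boltzmann
# rationality"), so mistakes are possible but less likely the worse
# they are.  "What they want" is the utility vector that makes the
# observed choices most likely.

def infer_utilities(choices, n_options, n_steps=2000, lr=0.1):
    """Maximum-likelihood utilities given observed choice indices."""
    counts = np.bincount(choices, minlength=n_options)
    n = len(choices)
    u = np.zeros(n_options)
    for _ in range(n_steps):
        probs = np.exp(u - u.max())
        probs /= probs.sum()
        u += lr * (counts - n * probs) / n  # gradient of the log-likelihood
    return u - u.max()  # utilities are only defined up to an additive constant

# Hypothetical data: option 2 chosen most often, options 0 and 1 occasionally.
observed = [2, 2, 2, 0, 2, 2, 1, 2]
print(infer_utilities(observed, n_options=3))
```

Once you assume the person wants *something*, their mistakes become noise to average over rather than extra data about what they want.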
An alternative to CEV is CV, that is, leave out the extrapolation.
You have a bunch of non-extrapolated people now, and I don’t see why we should think their extrapolated desires are morally superior to their present desires. Giving them their extrapolated desires instead of their current desires puts you into conflict with the non-extrapolated version of them, and I’m not sure what worthwhile thing you’re going to get in exchange for that.
Nobody has lived 1000 years yet; maybe extrapolating human desires out to 1000 years gives something that a normal human would say is a symptom of having mental bugs when the brain is used outside the domain for which it was tested, rather than something you’d want an AI to enact. The AI isn’t going to know what’s a bug and what’s a feature.
There’s also a cause-effect cycle with it. My future desires depend on my future experiences, which depend on my interaction with the CEV AI if one is deployed, so the CEV AI’s behavior depends on its estimate of my future desires, which I suppose depends on its estimate of my future experiences, which in turn depends on its estimate of its future behavior. The straightforward way of estimating that has a cycle, and I don’t see why the cycle would converge.
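To see why convergence isn’t automatic, here’s a toy version of the cycle (entirely made up; the coefficients are chosen only to make the update non-contractive):

```python
# One step of the (imaginary) loop: the AI plans behavior around its
# current estimate of my desires, and experiencing that behavior
# shifts the desires it should be estimating.

def update(desire_estimate):
    behavior = -1.1 * desire_estimate  # hypothetical over-correcting plan
    return 0.5 + behavior              # feedback of experience onto desire

x = 0.0
for step in range(10):
    x = update(x)
    print(step, round(x, 3))

# Because the update has slope -1.1 (magnitude > 1), the iterates
# oscillate around the fixed point with growing amplitude instead of
# converging to it.
```

A fixed point exists here, but naive iteration never finds it; nothing in the CEV story guarantees the real cycle behaves any better.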
The example in the CEV paper about Fred wanting to murder Steve is better dealt with by acknowledging that Steve wants to live now, IMO, rather than hoping that an extrapolated version of Fred wouldn’t want to commit murder.
ETA: Alternatives include my Respectful AI paper, and Bill Hibbard’s approach. IMO your list of alternatives should include alternatives you disagree with, along with statements about why. Maybe some of the bad solutions have good ideas that are reusable, and maybe pointers to known-bad ideas will save people from writing up another instance of an idea already known to be bad.
IMO, if SIAI really wants the problem to be solved, SIAI should publish a taxonomy of known-bad FAI solutions, along with what’s wrong with them. I am not aware that they have done that. Can anyone point me to such a document?
Is there any reason to believe that the Persistent Problems Group would do better at making sense of the literature than people who write survey papers? There are lots of survey papers published on various topics in the same journals that publish the original research, so if those are good enough we don’t need yet another level of review to try to make sense of things.
For example, maybe you could chill the body rapidly to organ-donation temperatures, garrote the neck, ...
It’s worse than I said, by the way. If the patient is donating kidneys and is brain dead, the cryonics people want the suspension to happen as soon as possible to minimize further brain damage. The organ donation people want the organ donation to happen when the surgical team and recipient are ready, so there will be conflict over the schedule.
In any case, the fraction of organ donors is small, and the fraction of cryonics cases is much smaller, and the two groups do not have a history of working with each other. Thus even if the procedure is technically possible, I don’t know of an individual who would be interested in developing the hybrid procedure. There’s lots of other stuff that is more important to everyone involved.
I have a fear that becoming skilled at bullshitting others will increase my ability to bullshit myself. This is based on my informal observation that the people who bullshit me tend to be a bit confused even when manipulating me isn’t their immediate goal.
However, I do find it very useful to be able to authoritatively call out someone who is using a well-known rhetorical technique, so I have found reading “Art of Controversy” worthwhile. The obviously useful skill is to recognize each rhetorical technique and find a suitable retort in real time; the default retort is to name the technique.
You said:
Causal decision theorists don’t self-modify to timeless decision theorists. If you get the decision theory wrong, you can’t rely on it repairing itself.
but you also said:
...if you build an AI that two-boxes on Newcomb’s Problem, it will self-modify to one-box on Newcomb’s Problem, if the AI considers in advance that it might face such a situation.
I can envision several possibilities:
Perhaps you changed your mind and presently disagree with one of the above two statements.
Perhaps you didn’t mean a causal AI in the second quote. In that case I have no idea what you meant.
Perhaps Newcomb’s problem is the wrong example, and there’s some other example motivating TDT that a self-modifying causal agent would deal with incorrectly.
Perhaps you have a model of causal decision theory that makes self-modification impossible in principle. That would make your first statement above true, in a useless sort of way, so I hope you didn’t mean that.
Would you like to clarify?
I can’t have “incorrect” goals or emotions
You can have goals that presuppose false beliefs. If I want to get to Heaven, and in fact there is no such place, my goal of getting to Heaven at least closely resembles an “incorrect goal”.
This raises an interesting question—if a Friendly AI or altruistic human wants to help me, and I want to go to Heaven, and the helper does not believe in Heaven, what should it do? So far as I can tell, it should help me get what I would want if I had what the helper considers to be true beliefs.
In a more mundane context, if I want to go north to get groceries, and the only grocery store is to the south, you aren’t helping me by driving me north. If getting groceries is a concern that overrides others, and you can’t communicate with me, you should drive me south to the grocery store even if I claim to want to go north. (If we can exchange evidence about the location of the grocery store, or if I value having true knowledge of what you find if you drive north, things are more complicated, but let’s assume for the purposes of argument that neither of those hold.)
This leads to the practical experiment of asking religious people what they would do differently if their God spoke to them and said “I quit. From now on, the materialists are right, your mind is in your brain, there is no soul, no afterlife, no reincarnation, no heaven, and no hell. If your brain is destroyed before you can copy the information out, you’re gone.” If a religious person says they’d do something ridiculous if God quit, we have a problem when implementing an FAI, since the FAI would either believe in Heaven or be inclined to help religious people do something ridiculous.
So far, I’ve had one Jehovah’s Witness say he couldn’t imagine God quitting. Everyone else said they wouldn’t do much different if God quit.
If you do this experiment, please report back.
It would be a problem if many religious people would apparently want to commit suicide if their God quit: the FAI convinces itself that there is no God, and then helpfully goes and kills them.
See Alcor’s stock answer to this. The Arrhenius equation is mentioned.
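For reference, the equation itself:

$$k = A \, e^{-E_a/(RT)}$$

where $k$ is a reaction rate, $A$ a prefactor, $E_a$ the activation energy, $R$ the gas constant, and $T$ the absolute temperature. The relevant point is that rates fall off exponentially as $T$ drops, so decomposition chemistry is effectively frozen out at liquid-nitrogen temperatures.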
Do you think that people use the downvote to tell another user that they are a terrible person… or do they simply use it to express disagreement with a statement?
There’s another possibility. I downvote when I feel that reading the post was a waste of my time and I believe it wasted most other people’s time as well.
(This isn’t a veiled statement about Roland. I do not recall voting on any of his posts before.)
Does anyone know of an example where arguing objective morality with someone who is doing evil things made them stop?
(ETA: The point being that I agree with the parent and grandparent posts that people who won’t rationally discuss morality are often afraid of things like this. I’m just wondering whether the belief underlying that fear is true or false.)
I have experienced consequences of donating blood too often. The blood donation places check your hemoglobin, but I have experienced iron-deficiency symptoms when my hemoglobin was normal and my serum ferritin was low. The symptoms were twitchy legs when I was trying to sleep, and insomnia; the iron deficiency was confirmed with a ferritin test. The symptoms went away and my ferritin returned to normal when I took iron supplements and stopped donating blood, and I stopped the supplements after the normal ferritin test.
The blood donation places will encourage you to donate every 2 months, and according to a research paper I found when I was having this problem, essentially everyone will have low serum ferritin if they do that for two years.
I have no reason to disagree with the OP’s recommendation of donating blood every year or two.
Before my rejection of faith, I was plagued by a feeling of impending doom.
I was a happy atheist until I learned about the Friendly AI problem and estimated the likely outcome. I am now plagued by a feeling of impending doom.
The Tipler/Obama/aether connection seemed bizarre enough that I looked it up:
http://pajamasmedia.com/blog/obama-vs-einstein/
Some quotes:
Einstein’s general relativity is just a special case of Newtonian gravity theory incorporating the ether
Hamilton-Jacobi theory is deterministic, hence quantum mechanics is equally deterministic
There was absolutely nothing revolutionary about twentieth century physics.
I agree on the “random” part.
You’re spending after-tax money if you buy the flight yourself, or before-tax money if you donate to SIAI, assuming they’re a 501(c)(3). If you trust them to honor a targeted donation (I would), it’s better to donate.
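To make that concrete (the 30% marginal rate is only for illustration): buying a $500 flight with after-tax money takes

$$\frac{\$500}{1 - 0.30} \approx \$714$$

of pre-tax earnings, while a deductible $500 donation costs $500 of pre-tax earnings, assuming you itemize and the deduction applies at your marginal rate.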