There may also be a limit to how wisely one can argue that spending money on wars while cutting taxes for the wealthy is sound economic policy.
Does any viewpoint have a right to survive in spite of being wrong?
I don’t see a terrible problem with comments being “a discussion about the facts of the post”; that’s the point of comments, isn’t it?
Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.
Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth’s dwarves, Star Trek’s Vulcans, or GEICO’s Cavemen doesn’t seem like it would have the same world-shattering implications.
The “people” in the quoted bit are correct. This is not science; this is statistical analysis.
It is possible that an individual would be better served by this social network, though I have generally agreed that a physician who treats himself has a fool for a patient, and the more so for a layman who neglects to consult competent medical authorities. These social networks certainly cannot take the place of original research; they rely on existing observed trends.
Well, you will have to be careful how you do it; my understanding is that most doctors are exasperated at people who self-diagnose based on reading things on the Internet. It’s a bias, sure, but it doesn’t seem to be an unreasonable one. So you wouldn’t want to bring it up on your very first visit. You will need to wait until you’ve demonstrated your non-crank-ness.
Once you and your doctor know each other better, though, I think it would be an excellent idea to bring more data to the table. My objection is to an article entitled “Med Patient Social Networks Are Better Scientific Institutions”, not one entitled “Med Patient Social Networks Are A Useful Tool In Improving Care”.
According to the article, they lack crucial features such as double-blinding. Most social networks lack the openness and data retention critical for effective peer review. It is possible to learn something from a network like the one described, but I would hesitate to call it science.
Other hominids have been known to keep pets. I would not be surprised if cetaceans were capable of this as well, though it would obviously be more difficult to demonstrate.
I don’t think it’s possible to integrate core Babyeater values into our society as it is now. I also don’t think it’s possible to integrate core human values into Babyeater society. Integration could only be done by force and would necessarily cause violence to at least one of the cultures, if not both.
My journey away from theism was characterized by smaller arguments such as these. There was no great leap, just a steady stream of losing faith in doctrines I had been brought up to believe. Creationism went first. Discrimination against homosexuals went next. Shortly after that, I found it impossible to believe in the existence of hell, except perhaps in a sort of Sartrean way. Shortly after that, I found myself rejecting large portions of the Bible, because the deity depicted therein did not live up to my moral standards. At that point I was finally ready to examine the evidence for God’s existence, and find it wanting.
I think in the end you will find that there are two things which can work. You must either point out that the beliefs lead to conclusions that are not just inconsistent, but also absurd, or you must point out that the beliefs lead to conclusions that contradict more “core” beliefs, such as “love your neighbor as yourself”.
Fred Clark is a liberal, fairly orthodox Christian. He blogs on a variety of subjects, including the birther/TeaParty movement, the deficiencies of creationism ([1] [2]), the strange phenomenon of religious hatred of homosexuals ([3] [4] [5]), and an interesting view on vampires. (He also has an entertaining ongoing series where he rips apart the popular fundie series ‘Left Behind’, and shows how the writers know nothing of their own religion, let alone how the real world works.)
You could do worse than to look at how he handles this sort of thing, from a religious perspective.
Surely it would be better in multiple ways to simply find a well-spoken religious person with whom you can work. He will have more knowledge of his audience than you have, so there’s a practical benefit, as well as the moral benefit of not being dishonest.
We’ve evolved something called “morality” that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
Proper posture tends to be more comfortable; surely this is a benefit to myself.
I also apologize to people when I have wronged them, not because they are higher-status than me, but because I do not like being a jackass.
It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone chooses herself to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?
In all honesty, I don’t think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly “insisted” on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMware virtual machine can, if properly configured, use the optical drive in its host computer.)
What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet “fairness” and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks’ concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be “Do not simulate without sufficient resources to maintain that simulation indefinitely.”
If I’m following your “logic” correctly, and if you yourself adhere to the conclusions you’ve set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there’s no such thing as continuity of identity, so you’re already dead; the guy in your body is just a guy who thinks he’s you.
I think this may safely be taken as a symptom that there is a flaw in your argument.
No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in “return”. A host’s duty to his guests doesn’t go away just because that host had a poor experience when he himself was a guest at some other person’s house.
If our simulators don’t care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.
If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.
If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.
Of course, there’s always the possibility that simulations may be much more similar than we think.
All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.
As for when the harm occurs, that’s a nebulous concept hinging on the meanings of ‘harm’ and ‘occurs’. In Dan Simmons’ Hyperion Cantos, there is a method of execution called the ‘Schrödinger cat box’. The convict is placed inside this box, which is then sealed. It’s a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict’s death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.
I would argue that the ‘harm’ of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I’d say ‘potential harm’ is created, which will be actualized at an unknown time. If the convict’s friends somehow rescue him from the box, this potential harm is averted, but I don’t think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.
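The box’s mechanism amounts to a per-tick Bernoulli trial, which is why the convict can never infer anything from having survived so far: the wait is geometrically distributed and memoryless. A minimal sketch (the per-tick probability is an arbitrary illustrative value, not anything from the novel):

```python
import random

def time_until_trigger(p_per_tick=0.001, rng=None):
    """Simulate the box's random number generator: each tick it
    fires with a small fixed probability. The resulting wait is
    geometrically distributed with mean 1/p -- and memoryless,
    so surviving N ticks tells the convict nothing about tick N+1."""
    rng = rng or random.Random()
    ticks = 0
    while True:
        ticks += 1
        if rng.random() < p_per_tick:
            return ticks
```

The potential harm is fixed the moment the box is sealed; only the tick on which it is actualized is random.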
If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.
In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.
Your second link was discussing simulating the same personality repeatedly, which I don’t think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.
Suppose I am hiking in the woods, and I come across an injured person who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth, I choose to rescue him, does that avert the harm that befalls the man I left to die? I feel this is the same sort of question as many-worlds. I can’t wave away my moral responsibility by claiming that in some other universe, I will act differently.
Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to ‘exist’? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
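The Library-of-Babel point can be made concrete: you can enumerate every possible string and assign each one an index, but the index for any particular text carries as much information as the text itself, so “recovering” a deleted entry from the library means already knowing it. A minimal sketch (function names are mine, not from any library):

```python
def babel_index(text: str) -> int:
    """Index of `text` in an enumeration of all byte strings,
    shorter strings first. The index is as large as the text
    itself, so knowing the index is knowing the text."""
    data = text.encode()
    # Skip past all strings shorter than this one...
    offset = sum(256 ** k for k in range(len(data)))
    # ...then rank this string among those of equal length.
    rank = int.from_bytes(data, "big") if data else 0
    return offset + rank

def babel_lookup(index: int) -> str:
    """Inverse: reconstruct the string stored at a given index."""
    length = 0
    while index >= 256 ** length:
        index -= 256 ** length
        length += 1
    return index.to_bytes(length, "big").decode()
```

The round trip works, but the “address” of a journal entry in this library is a number with as many bytes as the entry had: nothing was compressed, and nothing can be recovered without effort equivalent to rewriting it.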
In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That’s of little comfort to me, though, if I am informed that I’m living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of ethics.
I think the thing that made me a seeker-after-rationalism is the same thing that made me an agnostic: Greg Egan’s Oceanic.
I grew up in a fundamentalist household and had had one moment of religious euphoria. Oceanic made me confront the fact that religious euphoria, like any other euphoria, is just a naturalistic phenomenon in the brain. I’m still waiting on my fundamentalist parents to show evidence for non-naturalistic causes of naturalistic phenomena.