I think unusual things have happened (things with a low probability that nevertheless occurred; things about which it might not be appropriate to say they actually had a 100% chance of occurring all along if only we knew enough). So unusual things have occurred, but nothing strange.
Thanks for this! This page will keep me busy for a while. Ethics is my favorite branch of philosophy, which is my favorite hobby (having abandoned the idea of philosophizing for money); and until this page, pondering the use of ethics in the development of Friendly AI was not on my mental radar.
What is meant by that question about “should”? If it’s a general inquiry, I have always considered it like so: If it is said you “should” do action Y, then action Y is thought to cause some outcome X, and this outcome X is thought to be desirable.
I’m a biochemistry major, so organic chemistry takes up a large part of my time. I’m still just a lowly undergrad, though, so Google will probably serve you as well as I can. In any case, here are my thoughts:
Rancidity is caused by several mechanisms: mostly hydrolysis, various microorganisms about which I know nothing, and oxidation. I do not think butter will go rancid in liquid nitrogen, since it would need water, microorganisms, or oxygen to do so. I imagine rancidification is slowed at cryogenic temperatures, but I’m not sure if oxidation and hydrolysis would completely stop.
I don’t see why they wouldn’t be… but given the site we’re on, I’m hesitant to come right out and say yes.
I would suspect this question is not answerable with any precision at this time.
The presence of oxygen is certainly relevant to rancidification normally, so I would guess so; but don’t quote me on it.
This is clear and well-written, and makes sense to me. I don’t think any of it conflicts with my statement, though (if you mean to correct rather than expand upon). My original statement is just a more general version of your more detailed divisions: in each case, “should” argues for a course of action, given an objective. The objective is often implicit, and sometimes you must infer or guess it.
“You shouldn’t steal those cookies [...if you want to be moral].” More formally, perhaps: “Not stealing them will be the moral choice; do not steal them if being moral is desired.”
“You should do X [...if you want to have fun].” More formally: “Doing X will be fun; do it if fun is desired.”
How should I help us achieve immortality?
Heh, a good point; I would actually have replied the same way on any forum, but I thought I’d include that comment as a nod to LessWrong’s devotion to rationality. It probably helped that I had just read an article about knowing when not to venture a guess!
I’m asking specifically what field I should go into; perhaps that wasn’t clear enough.
Your suggestion is not correct if my intellectual contributions would be more valuable than any or most monetary ones I could make. But don’t think I have not considered the opposite, too; to that end, what’s the best way to make money? Economists aren’t as rich as I would have guessed.
Thank you! I will do so forthwith.
Preventing my death would be no more valuable to whom? A happier entity would be an improvement for whom? Certainly not for me. Preventing the death of an already-conscious being would be more valuable to said being; probably to most other people, too. Is it okay to murder someone if you’re going to have a baby to replace them?
We already have a method for making new people, so preventing the deaths of those already here has higher priority, I think.
That’s an excellent way to think about it. I had not considered it like that; it makes me feel less guilty about the possibility of going into finance!
However, intellectual contributions are not necessarily purchasable. That is, donating enough money for an organization to hire two mediocre scientists may not yield the insights of one slightly better scientist; the returns on additional researchers may not scale linearly. To paraphrase a quote from an article I’ve forgotten the rest of, “a hundred years of doggy living won’t add up to one human insight.”
Not to suggest I am so excellent as to make all other scientists look like dogs, of course. Rather, I feel that the insights of any one researcher are possibly unique. The example that comes to my mind is that of vulcanized rubber—discovered by accident. Who knows when it would have come about if Goodyear had gone into law instead? Who knows but that I might stumble on the Vulcanized Rubber of Immortality? In any case, a world where everyone researched immortality would be better than a world where one person researched it with the funds of everyone else.
Science is my true love and I don’t think I shall abandon it, but finance is a little interesting. Do you suggest it just because financial careers come with high salaries, or do you think that understanding of finance means understanding how to manipulate money in order to become wealthy? I envisioned the latter for economics, but like I said, economists don’t necessarily seem to be better investors than anyone else.
Rethink that last paragraph.
1.) An objective assessment says nothing of their value and importance in the world, because value and importance are assigned by individuals for themselves.
2.) To say wanting immortality is overrating the importance of your own life is like saying that liking the color green is overrating the importance of your own artistic ability—totally nonsensical.
3.) Technological immortality is not just for oneself; I think it is probably rated so highly important because it is for everyone. Everyone must die, and any death is as tragic as any other: to think one must only care about immortality for one’s own sake is fairly cynical.
4.) Finally, immortality is almost guaranteed to be more important than any other goal one could have. Anything else can be deferred and accomplished once you are immortal, but once you have died, that’s it.
You should give this article a read-through, I think:
http://lesswrong.com/lw/1yi/the_scourge_of_perversemindedness/
Insisting upon calling your own life unimportant is not rational, but perverse.
I’m a utilitarian too; I posted arguments #1 and #2 because I don’t know how I could argue for inherent value either, and Aleph might not be utilitarian. Note, though, that happiness can be inherently valuable, yet the same event can still result in different utility (or importance) for different people: Aleph may not value his own life as much as I value mine, causing immortality to make me more happy than it would make him.
I think you are arguing against straw men, however. No one has said, as far as I’m aware, that one’s own life is necessarily uniquely important, nor was this implied. I have not seen anyone suggest that lives aren’t equivalent in importance, and neither is this required for immortality to be valuable. And what does the next generation’s rationality have to do with the fact that death prevents one from maximizing one’s own happiness? As my happiness is inherently valuable, it’s inherently valuable to me to maximize my own happiness, whether or not the next generation is happy. Surely it is better for my happiness and theirs to be added together than for theirs to replace mine.
Even by your own examples, immortality is important. You make the argument that total happiness would be the same if a death were offset by a new life, and this is true; but as an example, it is flawed: how often does that happen? In real life, death does not usually result in new life, so utility is increased dramatically by defeating death. But even if we assume that every death is balanced by a new life, total utility would be most increased if there were new life without the corresponding death.
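To make that concrete, here is a toy calculation of my own (the numbers, and the assumption that happiness simply sums across people, are mine rather than anything you said): suppose each of N living people contributes utility u per year. In the replacement scenario, one person dies and one is born, so the total stays at N × u. In the no-death scenario, the same person is born but nobody dies, so the total is (N + 1) × u, which is strictly larger whenever u > 0. Addition beats replacement.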
It seems like you take all the drawbacks to death advanced so far (the sadness of others; the debility of age; the fact that we already know how to create new life, which makes death the main enemy of utility; the cost of replacement people; the lack of happiness once dead) and wave them aside, saying “besides all those, death isn’t so bad.” Well, yes: death isn’t so bad, without all the bad things about death!
I’m not sure exactly what you’re trying to say here. Your original statement was meant to point out that immortality is “not as important as people think”, right? Who is it that overrates immortality, then? Apparently not me, since we don’t appear to actually disagree that death is bad and immortality is good, which is all I’ve claimed. (We do disagree below, about death not being the end of the world, but that’s not any claim I’ve made before.)
Thank you. :D I hope to inspire everyone to feel the same way—then we’ll be immortal in no time!
Thank you—I am looking now.
I do tend to get pretty enthusiastic about technology. It might end up destroying us before a natural disaster would, but at least we have the theoretical chance to build a much better world for ourselves—a chance we wouldn’t ever have without it, I think.
Obviously, I have already asked this. Since I have stated that life extension is going to be my life’s work, I probably concluded that yes, I should. (And with pretty hefty “surety and importance coefficients”!)
So do you actually mean “I don’t think life extension/immortality is a good idea”?
Okay, I see what you meant by your original reply. My reply was based on the idea that you meant a total absence of technology might be preferable. If that seems like a ridiculous idea to you, well, it does to me too; and I’m confused and disgusted by the fact that I have spent a lot of time arguing against people who think that very thing: that technology, all of it, is bad.
Well, as far as you know, it is the end of the world. Solipsism is a rationally defensible position, though I personally feel it’s only worth keeping in mind to remind one of the limits of knowledge, and is not a philosophy that should actually govern behavior. (Thus this is intended as a comment only, not a refutation.)
I didn’t suggest it was about defending ideas. I only suggested that a particular idea could be defended by rationality. I don’t think that implies the sole use of rationality is to defend ideas.
Your points are irrelevant. The man asserted that his religious beliefs meant Artificial Intelligence was impossible, and that is the claim the author of this post was disputing. No souls need to be tested, because the existence of souls was not contested. Nor did Eliezer say he had created an AI.
I’m also surprised no one pointed out that Mark D’s “reversal” scenario is totally wrong: if Eliezer were unable to create an AI, that would not at all imply that the man’s own assertions were true. It might, at best, be very weak evidence; there could be many reasons other than a lack of a soul why Eliezer might fail.