When I use the word morality, I certainly don’t mean any rules of conduct.
What is your definition of human morality?
So we presume that all members of SIAI want to live forever? Maybe someone enjoys sex more than longevity.
I believe in an absolute moral system as much as I believe in the rules of mathematics and other ideas. We can debate whether ideas (or the physical reality for that matter) exist in the absence of a mind, but I guess that is not the point.
As long as we have values, desires, dislikes and make judgements (which all of us do, and which maybe is a defining characteristic of the human being beyond the biological basics), and if we want to put these values into a logically consistent system, we have an absolute moral system.
So if I stop having any desires and stop making any judgements, then I may still believe in a moral system, as much as an agnostic won’t deny the existence of God, but it would be totally irrelevant to me.
“No Universally Compelling Arguments” contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.
There is no universally compelling argument for morality, just as there is no universally compelling argument for reality. You can change the physical perception as well. But it does not necessarily follow that there is no absolute reality.
I also have to correct my position: CEV is not absolute morality. Volition is rather a “receptor” or “sensor” of morality. I made a conceptual mistake.
Can you formulate your thoughts value-free, that is, without words like “profoundly stupid” or “important”? Because these words suggest that we should do something. If there is no universal morality, why do you postulate anything normative? Other than for fun.
ps I have to stop posting. First, I have to take time for thinking. Second, this temporary block is driving me insane.
Again, it’s not that I don’t care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don’t feel arbitrary to me—after all, I care about them a great deal! -- but I didn’t choose to care about them. I just do.
And you believe that other minds have different core beliefs?
Sure, and those are the claims I take the time to evaluate and debunk.
I think we should close the discussion and take some time thinking.
Please explain the relationship between G701-702 and G698-700.
“Chance is low” or “chance is high” are not merely descriptive; they also contain values. Chance is low --> probably safe to drive; chance is high --> probably not, based on the more fundamental axiom that surviving is good. And “surviving is good” is not descriptive, it is normative, because “good” is a value. You can also say instead: “you should survive”, which is a normative rule.
Thanks for the rephrasing. I would amend:
Weak scientific reductionist:
c) concepts and theories in chemistry and biology are only useful high level approximations to physical models of the universe. They could be reduced to physical theories if b) does not apply.
Since I’m Pavitra, it doesn’t really matter to me if G101 has a point; I care about it anyway.
So there is no normative rule that Pavitra (you) should care about G101. It just happens, it could also be different and it does not matter. That is what I call (moral) nihilism.
Don’t you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so… hm… I still won’t.)
Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might require looking into more carefully.
A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?
Not natively, no. That’s why it requires advocacy.
You care about things. I assume you care about your health. In that case, you don’t want to be in a crash. So you’ll evaluate whether you should get into a car. If you get into the car, you are an optimist, if not, you are a pessimist.
Again, why is it important to advocate anything? -- Because you care about it. -- So what?
I like gensyms.
G101: Pavitra (me) cares about something.
What is the point in caring for G101?
At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.
What if you can’t predict?
I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes out with a decision about either driving or not. This heuristic process is called optimism or pessimism.
Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
So your point is there is no point in caring for anything. Do you call yourself a nihilist?
I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.
Would you call yourself a naive realist? What about people on LSD, schizophrenics and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds. You surely mean that there is one reality between all humans that are “sane”.
Suppose you’re getting into a car, and you’re wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.
I would say, the optimistic view is saying “There is probably/hopefully no crash”. But don’t let us fight over words.
You’re constructing a universal CEV. It’s not an already-existing ontologically fundamental entity. It’s not a thing that actually exists.
Does the CEV of humankind exist?
What do you think of Eliezer’s idea of “coherent extrapolated volition of humankind” and his position that FAI should optimise it?
If you read a physics or chemistry textbook, you’ll find a lot of words and only a few equations, whereas a mathematics textbook has far more equations, and its words serve to explain those equations. The words in a physics book, by contrast, explain not only the equations but also the issues that the equations address.
However, I haven’t fully thought about reductionism, so do you have any recommendations on what I should read?
My current two objections:
1. Computational
According to our current physical theories, it is impossible to predict the behaviour of any system larger than a dozen atoms, see Walter Kohn’s Nobel Lecture. We could eventually have a completely new theory, but that would be an optimistic hope.
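To make the computational objection concrete, here is a rough back-of-the-envelope sketch of Kohn’s “exponential wall” argument. The grid resolution (p = 3 points per dimension) and the ~10^80 figure for atoms in the observable universe are my illustrative assumptions, not claims from the lecture or this discussion:

```python
# An N-electron wavefunction discretised on a grid with p points per
# spatial dimension needs p**(3*N) stored values. Even for a modest
# p, this overtakes the number of atoms in the observable universe
# (conventionally estimated around 1e80) long before N reaches the
# scale of a single molecule, let alone a brain.

def wavefunction_parameters(n_electrons: int, points_per_dim: int = 3) -> int:
    """Grid values needed to store an N-electron wavefunction."""
    return points_per_dim ** (3 * n_electrons)

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # rough conventional estimate

for n in (10, 100, 1000):
    exceeds = wavefunction_parameters(n) > ATOMS_IN_OBSERVABLE_UNIVERSE
    print(n, exceeds)  # prints: 10 False / 100 True / 1000 True
```

The point is only the scaling: exact simulation cost grows exponentially in the number of particles, which is why “a completely new theory” would be needed rather than merely bigger computers.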
2. Ontological
Physical objects have other qualities than mathematical objects. And values have other qualities than physical objects. Further elaboration needed.
It shouldn’t, because this is a straw man, not the argument that leads us to conclude that there isn’t a single absolute morality.
It is not a straw man, because I am not attacking any position. I think I was misunderstood, as I said.
In that sense, everything could be a mathematical object, including qualia. We just haven’t identified it.
Also, the concept of actual-but-still-unknown-X and previously-hypothesized-X can be applied to morality in terms of actual-but-still-unknown-norm and previously-hypothesized-norm.
Sorry, I am developing my ideas in the process of the discussion and I probably have amended and changed my position several times thanks to the debate with the LW community. The biggest problem is that I haven’t defined a clear set of vocabulary (because I haven’t had a clear position yet), so there is a lot of ambiguity and misunderstanding which is solely my fault.
Here is a short summary of my current positions. They may not result in a coherent system. I’m working on that.
1. Value system / morality is science
Imagine an occult Pythagorean who believes that only mathematical objects exist. He/she wouldn’t understand the meaning of electrons and gravitational forces, because they cannot be fully expressed in mathematics. He/she would understand Coulomb’s law and Newton’s law of gravitation, but a physicist needs more than these mathematical equations for the understanding of physics.
That is the difference between physicists and chemists on one side and mathematicians and string theorists (I have not the slightest idea about string theory, so regard this part as my modest attempt of humour) on the other side.
Analogously, you need to understand the value system to understand and possibly predict the actions of value agents (humans, animals, maybe AIs). Maybe the value system can be mathematicised, or not.
But it would be a scientific explanation. I agree with you.
2. Something matters to me
We all have values. You asked whether the understanding of the value system has any external consequences or whether the benefit is purely a state of mind. I wonder why it matters to you to know the difference.
You may answer that thinking of these problems makes you biologically fitter and if you don’t ask these questions, your kind will die out and those questions won’t be asked.
But when you asked the question, you did not consider your biological fitness. And if you did consider your biological fitness, then why does biological fitness matter to you? There is at least one thing that matters to you (assuming you are not a p-zombie), so at least the desire “something matters to me” is real, as real as your knowledge of the world.
Assuming you are not a psychopath, your only desire is not your own survival; being empathetic, you also desire the well-being of your fellow animals, humans and sentient beings. And you know that your fellow human beings are empathetic (or act as if they are empathetic) as well. Ergo you can establish an intersubjective consensus and some common ground on what the good is.
3. Epistemology
Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes. You may read all the books on neurobiology, but you may learn more about desires by reading a single book by Nabokov. (If you think that you don’t care, please go back to point 2.) From here, continue with the text diagram.
ps The computer that you need to model the quantum states of a brain would be bigger than the universe, see [Kohn’s Nobel Lecture](http://nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf).
Are you asking me to use a certain LW-inside vocabulary? In that case, a dictionary would be helpful. Which specific word or phrase is not clear to you?
Or are you holding a logical positivist position that some words or contexts do not have any meaning at all?
You walk up to the fridge, get out a banana and eat it.
If I am Laplace’s demon, I might be able to predict your actions (or not). But science does not explain what hunger and desire are; it can describe them using its own language, but the scientific language does not include any words to describe values. Hunger and desire have more qualities than just neuronal processes.
Anyway, the difference might be pointless, because Laplace’s demon does not exist and we can’t, in principle, predict anything more complicated than a dozen atoms unless we have a fundamentally new theory of physics. In that case, the only thing we have left is normative/value theories that help us predict someone’s actions.
I did read the article “No Universally Compelling Arguments” you linked to and couldn’t find any convincing rebuttal to my arguments.
I just read “Making Beliefs Pay Rent” and if I got it right, then it says that science is good (and absolute) because it can predict things while normative theories don’t. That is a good point.
My belief in an absolute morality gives me the foundation to enquire into moral problems. I’ll try to figure out what the “absolute good” is and try to live my life according to that.
We can predict and explain the “decision-making” of inanimate objects using scientific theories. We can understand the decision-making of moral agents (humans) using normative theories (we might be able to predict their actions using scientific theories, but we won’t understand or explain them without normative theories).
What about alien intelligence? If we can establish an intersubjective consensus with them and we realise that they have a value system that we can understand, then we can use our own system of normative theories to understand and explain their “decision-making”.
If we can’t establish an intersubjective consensus with them, then we might be able to predict their actions using scientific theories, but we won’t be able to understand their “motives”. They would act according to an absolute AI-morality, to which we have no access lacking the intersubjective consensus with them.
To recap and rectify my argument: Intersubjective consensus about the physical world leads us to believe in an absolute physical reality. Intersubjective consensus about the moral/value world leads us to believe in an absolute morality. No intersubjective consensus—no belief in absolute anything.
Maybe, and I believe, the moral world is an emergent property of the physical world. Thus, we might be able to use physical theories to predict the actions of moral agents within the physical world, but we won’t be able to fully understand it only using physical theories since these don’t capture the emergent properties (values, desires, dislikes, et cetera).
Therefore, morality is not as absolute as reality, but it is analogously absolute. (That is/might be a correction to my current position.)
So, if an alien intelligence has a value system that we can understand, then we live within the same absolute morality. If an alien intelligence acts based on some other emergent properties we cannot understand, then, well, bad luck. (Another addition to my current position, thanks to this discussion.)
I can’t explain you to you. Point at your feet and say aloud, “You are here.”
That’s unfortunate, I thought you saw where I was coming from.
What I meant to say is “morality is as absolute as reality.” I hope that clears everything up.
Given that I experience God or anything supernatural empirically, and I can reasonably exclude that I am suffering from hallucinations, it is more probable for me to believe that the phenomenon was supernatural rather than an improbable quantum mechanical phenomenon. Maybe what I call God is actually Frud. Maybe God “is a tuna sandwich I once made that had a special property, it created the universe, past and future.” I don’t expect to realise all of God’s properties from a single experience.
Predictive power is not always required. Historians have quite a problem predicting things based on what they read about Caesar. You can’t thus say that there are no historical facts (facts as factual as in “objective” news reporting).
You point out a context that does not require predictive power, but you have not shown that this context is equivalent to testing for God’s existence empirically. Without a common context, your example is irrelevant to the issue.
I don’t get you. What is your understanding of “testing for God’s existence empirically?”
A sufficiently intelligent mind might deduce “Draq believes that the absolute morality is X”, but not “the absolute morality is X”.
Would you still agree with the argument if you substitute “morality” with “reality”?
As I repeatedly said, morality is as absolute or relative as reality. So if you don’t believe in an absolute reality either, then I can’t convince you, nor do I want to, since relativism/nihilism is a perfectly attainable position.
I just think that it is very arbitrary to say one exists and the other one is made up.
And that is not the way our everyday life is. We live in a world where we subconsciously accept the world around us as (absolutely) real, and we live in a world where we subconsciously accept values as (absolutely) real. If we value something, say “pancakes are tasty/desirable”, then we automatically think “it matters what we like”, which itself is a value.
Even if “it matters” is the only “moral” or mental perception we accept as absolute, then there is an absolute system.
“Something matters” cannot be explained descriptively (it does not have a meaning in physical terms), but has to be referred to within the value system. Therefore, the value system is self-referring and you cannot reduce it to sensory perception or scientific explanations.
Since we perceive both values and physical phenomena, I wonder why we regard one as absolute and the other one as relative.
Ah, I see where you’re coming from.
By the way, where am I coming from?
Using that definition, morality isn’t as absolute as physical reality.
Again, as I said, under your definition of absolute, which is that reality is absolute, I agree with your disapproval of my belief in absolute morality since morality is of a different quality than reality.
Our physical reality appears to be the common context that everything shares within our universe.
Your definition of absolute is plausible, but I do not share it. I think that mental phenomena exist independently from the physical world.
What makes me believe it? If I believe that mental phenomena vanish without the natural world, I could equally believe that the natural phenomena vanish without my mind (or “mental world”). To believe that one provides the context for the other is, I believe, an arbitrary choice. Therefore, I believe in their independent existence.
Concerning God: for many people, the God hypothesis is more than just the belief that the universe was created by some distant creator who does nothing else. God also intervenes in the world. So it is possible to test God’s existence empirically. And for many Christians, this is apparently happening. Spend enough time with them, and they will tell you fantastic stories.
Personally, I don’t believe in God.
What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call “humaneness”?