That’s very interesting. It sounds like you’ve started digging into the problems of level 4 multiverse ethics that I plan to write a series of sci-fi novels about. What I have already written (not much yet) can be read with Google Wave if you add “our-ascent-noofactory@googlegroups.com” to your contacts and then display the group waves. There are a couple of underlying concepts and questions which come together in my fictional work:
What could a positive post-singularity world look like?
What could sci-fi in a post-singularity world look like? → My answer is that “post-singularity sci-fi” doesn’t make sense, and the corresponding works of fiction would be about alternative possible worlds (which exist by modal realism or “classical” multiverse theories) and their hypothetical interactions.
How does modal realism multiverse ethics work, if at all?
How can one portray a civilization which ascends more or less indefinitely in the most impressive and interesting way?
How can a positive post-singularity world be reached after all?
Why do I care about the hypothetical consequences of modal realism? Because I think it’s the best ontology for modelling the world as correctly as possible, and I’m pretty convinced that it’s true (for reasons similar to those in “the map that is the territory”, but more founded in mathematical logic and the philosophy of mathematics). Trying to apply “pure” utilitarian reasoning to a modal realism multiverse leads to serious problems, for example:
A) The amounts of joy and suffering are actually infinite, which defeats the point of summing or integrating them. You could fix the problem by doing “local” computations, but then how do you define the “locations” in the right way if worlds are nested, or infinite in space or time? All of this is a huge headache for a convinced hedonistic utilitarian, which I used to be. (I find it hard to tell what I am, in an ethical sense, now. Possibly “confused” is the best short description.)
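The divergence in A can be made concrete with a toy sketch. The per-world utilities and the discount factor below are invented purely for illustration, and geometric discounting is just one conceivable way to make a computation “local”, not a claim about the right one:

```python
# Toy illustration: summing utility over an unbounded collection of worlds.
# The per-world utilities and the discount factor are invented placeholders.

def total_utility(utilities):
    """Naive global sum -- grows without bound as more worlds are included."""
    return sum(utilities)

# A constant stream of +1 utility per world: the partial sums have no limit.
partial_sums = [total_utility([1] * n) for n in (10, 100, 1000)]
print(partial_sums)  # [10, 100, 1000]

# One "local" fix: weight ever-more-distant worlds less, e.g. geometrically.
def discounted_utility(utilities, gamma=0.5):
    """Discounted 'local' sum -- converges even for an infinite stream."""
    return sum(u * gamma**k for k, u in enumerate(utilities))

# For a constant +1 stream this approaches 1 / (1 - gamma) = 2.0.
print(discounted_utility([1] * 1000))
```

The sketch only shows the shape of the problem: any weighting scheme smuggles in a choice of “location”, which is exactly the difficulty described above.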
B) Every configuration of sentience is actually realized somewhere in the modal realism omnicosmos, which I call Apeiron. From a purely abstract point of view, the mental states with positive and negative valence should be in one-to-one correspondence, which means that from a purely apeironal (maximally holistic) point of view there is a perfect balance of good and bad feelings in every respect. Interestingly, this observation supports the idea that theories like hedonic utilitarianism are only meaningful if applied “locally”.
C) Above each (computable) world lies an infinite (!) chain of worlds in which the first one is simulated, directly or indirectly (unless that is impossible, which I don’t think is the case). If you haven’t considered the problems of simulation ethics yet, this is a good reason to start doing so.
D) Trying to define a probability measure over anything on a whole level 4 multiverse is rather hopeless. Maybe it’s possible to define some fancy measures, but ultimately you have to face the problem of unbounded infinite cardinality.
Oh, I don’t know how to solve these problems in the most convenient way. All these questions and thoughts have left me with some kind of meta-ethical nihilism. However, I have tried to invent some new meta-ethical concepts like “(meta-)ethical synergism” (quantify ethical systems and use ensembles of those systems for making moral judgements) and “thelemanomics” (extract the underlying economic, social, political and ethical systems from the volitions of all people; roughly comparable to CEV), which could fix some meta-ethical problems.
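As a rough illustration of what “(meta-)ethical synergism” could mean computationally, here is a minimal sketch. The two scoring functions, the action encoding, and the weights are all invented placeholders, not a worked-out proposal:

```python
# Minimal sketch of "(meta-)ethical synergism": score an action under several
# ethical systems and combine the verdicts as a weighted ensemble.
# All systems, scores, and weights here are invented placeholders.

def hedonic_score(action):
    # Crude stand-in for a hedonistic-utilitarian verdict.
    return action["pleasure"] - action["suffering"]

def deontic_score(action):
    # Crude stand-in for a rule-based verdict.
    return 1.0 if action["keeps_promises"] else -1.0

# Hypothetical ensemble: each system paired with a weight summing to 1.
SYSTEMS = [(hedonic_score, 0.5), (deontic_score, 0.5)]

def ensemble_judgement(action, systems=SYSTEMS):
    """Weighted average of the verdicts of each ethical system."""
    return sum(weight * score(action) for score, weight in systems)

action = {"pleasure": 3, "suffering": 1, "keeps_promises": True}
print(ensemble_judgement(action))  # 0.5*2 + 0.5*1 = 1.5
```

Of course, where the weights come from is itself a meta-ethical question, which is exactly the regress the concept would have to address.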
I think we should stay in contact.
What I am doing:
I registered here on LW today, because this is the first post I thought I really should comment on. Those two questions are among the most important and most thought-provoking of all. They are even more important than the questions “What do you believe, and why do you believe it?”, because most of your beliefs might not be relevant to your actions at all.
1. I study mathematics and physics, because
1.1. I want to know what the world is and how it works.
1.1.1. Understanding the world better is generally useful, although it might lead to pretty unsettling insights.
1.1.1.1. Actually, I go on exploring the world in a mathematical/philosophical sense, because my curiosity is stronger than my fears.
1.1.2. If I understand how things work, I might be able to improve them.
1.1.2.1. I’m very fond of improving things, because perceiving suboptimality evokes negative emotional reactions in myself.
1.1.3. I do want to understand things, because understanding things is fun.
1.2. I suspect that studying could help me to earn more money.
1.2.1. Earning more money increases my ability to change the world according to my values.
1.2.2. I’m pretty annoyed that I have to earn money! That fact is restricting my freedom (ok, there are alternatives to using money, but they don’t look attractive enough to me). Earning money by using maths seems relatively convenient compared to most other alternatives.
1.3. I want to write sci-fi stories and think that studying those subjects might help a bit.
1.3.1. Most sci-fi stuff is not really compatible with my transhumanist views, so writing myself seems like a good solution.
1.3.2. I’m a hobby philosopher, but bare philosophy isn’t sexy enough, so I do some kind of implicit philosophy by creating sci-fi settings and stories, in order to explain my views.
1.3.2.1. Explaining the way I think is important, because it enables real communication about interesting topics.
1.3.2.1.1. Because of a lack of such communication I sometimes feel lonely and misunderstood.
1.3.2.2. It’s important to make my views popular, because I think they are too awesome to be restricted to a single individual.
1.3.2.2.1. I think my views are awesome, because I spent a lot of time pondering difficult philosophical questions, and had some insights few other people, or maybe no one else, ever had.
1.3.3. At the moment I’m not sure how to resolve some very difficult ethical problems, and I try exploring possible solutions by writing stories.
1.3.3.1. I think it would be great to come up with an “optimal” ethical system, but I’ve realized that you can’t measure ethical optimality without already having an underlying ethical system. Oh, I’m pretty disenchanted, so I’m content with anything that “feels right”.
1.3.3.2. Classifying different ethical systems might be a worthwhile goal, if there’s no single best candidate.
1.3.4. Writing can be more entertaining than less productive forms of entertainment.
2. Unfortunately I don’t feel that I have the necessary resources for finishing any stories, because I’m currently pretty preoccupied with doing maths. So, at the moment I’m not working on them.
2.1. I feel that I have been pretty inefficient at learning maths. Somehow I think I have to compensate for this by spending more time on learning, so that I can finish reasonably soon and feel reasonably competent.
2.1.1. I don’t want to finish my thesis and exams as quickly as possible, because I want to have the feeling that I really understand all the stuff I am supposed to understand.
2.1.1.1. Not really having that feeling is annoying!
2.2. I’m really not sure whether that’s the best decision, but I’m afraid of getting too stressed out by trying to learn and write at the same time while I still have a pretty full curriculum.
2.2.1. After having written 2.2 I feel stupid, because my “full curriculum” was my own decision and I could reduce it or try writing nevertheless.
Umm, actually I suspected that I might end up at a conclusion like that. Perhaps that’s also the most important reason why I started this comment: I should become better at using my time efficiently. My hope is that LW can help me with that.