themusicgod1
All the studies say that your odds are just not good enough to be worth it.
...and even if you are, people who are able to re-arrange the odds to their favour may end up crowding out the honest ones ;)
Wouldn’t he have just discarded it as he was trained with other notation?
It would be like someone in the modern English-speaking world trying to learn Chinese math notation. We could probably understand the concepts, so could in principle do it, but it would seem relatively unwieldy, even if it turned out to be a much more elegant way of doing math. We’d never know.
He might very well learn it and then go “that’s cute” and then ignore it.
I would explain the concepts of the craziest, most non-obvious yet ideally moral ideas I’ve ever had [such as the idea that Nick Bostrom’s interstellar opportunity-cost paper completely changes the nature of the pro-life debate: it is no longer sensible to freeze all fetuses instead of aborting them; instead, if we’re serious about being pro-life, we should pare humanity down to only what is needed to spread human life to other stars, and there are economic considerations to freedom, but they are subtle and complex]. Something that far off the wall might not go through directly, but it might come through as something equally out there [that the Greeks should dedicate all their energies and efforts to seafaring and trade].
And in fact, the further in the future I aimed to talk about, the better. What do 2,000 paltry years matter when we’re talking about post-singularity or near-singularity times? The differences between us will be minor in comparison, and will not be ‘smudged’ by the chronophone.
“No results found for \” expert moral system \”.”
-google.
Remember: it doesn’t have to be perfect, just better than us.
edit: Google was misinformed; this has been discussed. Nevertheless the point stands: unless there’s a particular reason to think we would perform better than an expert system on this topic, I am skeptical that acting, except insofar as to create one, is anything but short-term, context-dependent morality.
Earlier on in internet history there was a movement to make ‘tse’ a gender-neutral pronoun. It didn’t take, but I still use it.
Or alternatively you could go more recent and use the Baltimore dialect’s ‘Yo’ as a gender-neutral pronoun.
(ref: Stotko, E. and Troyer, M. “A new gender-neutral pronoun in Baltimore, Maryland: A preliminary study.” American Speech, Vol. 82. No. 3, Fall 2007, p. 262.)
This link is broken.
http://www.youtube.com/watch?v=mthDxnFXs9k looks like it might be the same video though.
A few thoughts:
Whether this is a good idea or not might very well be something you have to try for yourself to know; on the flip side, we are fairly different people 20 years after any particular decision. While some decisions may be final, especially in an age where we can display attributes of ourselves very publicly, it might make more sense to have a publicly accessible Crocker’s Flag somewhere that can be unset, perhaps 20 years down the line, so that you don’t damn your future self to a life of shameful feelings longer than necessary.
Secondly, ‘none of your business’ is not radically honest; and with the possible exception of the person hiding from the secret police, I’ve long maintained that since MAD/nuclear weapons became a possibility there is no such thing as ‘none of your business’: we all have a vested interest in the emotional situation and financial incentives of those in the global village. Should a veil of secrecy exist, it may very well cover that which will undo us all.
edit: 2013-me did not understand the full consequences of global surveillance. While it’s true that what’s covered by a veil of secrecy would doubtless include the seeds of our destruction, we are all hiding from the secret police, post-2013. Proceed with caution.
Sadly, your link is broken. Do you have a copy of this one?
edit : nevermind internet archive comes through.
link for the lazy
Likewise, if this were iterated 3^^^3+1 times (i.e. 3^^^3 plus the reader), it could easily be 50*3^^^3 (i.e. > 3^^^3+1) people tortured. The odds are that if it’s possible for you to make this choice, then unless you have reason to believe otherwise, they may too, making this an implicit prisoner’s dilemma of sorts. On the other side, 3^^^3 specks could possibly crush you, and/or your local cluster of galaxies, into a black hole, so there’s that to consider if you consider the life within meaningful distance of every one of those 3^^^3 people valuable.
Your link is 404ing. Is http://spot.colorado.edu/~norcross/Comparingharms.pdf the same one?
The probability that I’m the only person selected out of 3^^^3 for such a decision, p(i), is less than any reasonable estimate of how many people could be selected, imho. Let’s say well below 700 dB against. The chances are much greater that some proportion of those about to be dust-specked or tortured also gets this choice, with probability p(k). Then p(k)*3^^^3 > p(i) ⇒ 3^^^3 > p(i)/p(k), which is true for any reasonable p(i)/p(k).
So this means that the effective number of dust particles given to each of us is going to be roughly (1-p(i))p(k)3^^^3.
I’m going to assume any amount of dust larger in mass than a few orders of magnitude above the Chandrasekhar limit (~1e33 kg) is going to result in a black hole. I can even assume a significant error margin in my understanding of how black holes work, and the results do not change.
The smallest dust particle is probably a single hydrogen atom (really, everything resolves to hydrogen at small enough quantities, right?). 1 mol of hydrogen weighs about 1 gram and contains about 6e23 atoms, so each ‘speck’ weighs (1 g/mol)(1 mol / 6e23 specks)(1e-3 kg/g) ≈ 1.7e-27 kg. So (1-p(i)) p(k) (3^^^3 specks)(1.7e-27 kg/speck)(1 black hole / 1e33 kg) = roughly (3^^^3)(1e-60, and only smaller once the probability factors are included) = roughly 3^^^3 black holes.

ie, since 3^^^3 is a power tower of 3s of height 3^^3 = 3^27 ≈ 7.6e12, multiplying it by 1e-60 only subtracts a constant c ≈ 126 from the topmost exponent (1e60 ≈ 3^126): 3^(3_1^3_2^3_3^...^3_7.6e12 − c) = roughly 3^(3_1^3_2^3_3^...^3_7.6e12)

ie 3_1^3_2^3_3^...^3_7.6e12 − c = roughly 3_1^3_2^3_3^...^3_7.6e12.
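The constant factors in this back-of-the-envelope calculation can be sketched in a few lines of Python. This is only a sketch under the assumptions above (one hydrogen atom per ‘speck’, a ~1e33 kg collapse threshold); 3^^^3 itself is far too large to compute, so only the per-speck factor and the tower height are shown:

```python
import math

AVOGADRO = 6.022e23        # hydrogen atoms per mole
GRAMS_PER_MOL_H = 1.0      # molar mass of hydrogen, g/mol
KG_PER_BLACK_HOLE = 1e33   # assumed collapse threshold, kg (well above Chandrasekhar)

# mass of a single hydrogen-atom 'speck' in kg: ~1.66e-27
kg_per_speck = GRAMS_PER_MOL_H / AVOGADRO * 1e-3

# black holes produced per speck: a tiny constant factor, ~1.7e-60
black_holes_per_speck = kg_per_speck / KG_PER_BLACK_HOLE

# the claimed prior against being the sole chooser: 700 dB = 10^70 to 1
odds_against = 10 ** (700 / 10)

# 3^^^3 is a power tower of 3s of height 3^^3 = 3**27 (~7.6e12);
# multiplying it by 1.7e-60 merely subtracts ~137 from the top exponent
tower_height = 3 ** 27
top_exponent_shift = -math.log(black_holes_per_speck, 3)

print(f"kg per speck:        {kg_per_speck:.3e}")
print(f"black holes/speck:   {black_holes_per_speck:.3e}")
print(f"tower height:        {tower_height:.3e}")
print(f"top exponent shift:  {top_exponent_shift:.1f}")
```

The point the numbers make: subtracting a double-digit constant from the top of a power tower ~7.6e12 levels high changes nothing, so the answer stays “roughly 3^^^3 black holes”.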
In conclusion, I think at this level I would choose ‘cancel’ / ‘default’ / ‘roll a die and determine the choice randomly / not choose’, BUT would woefully update my concept of the size of the universe to contain enough mass to even support a reasonably infinitesimal probability of some proportion of 3^^^3 specks of dust, and 3^^^3 people, or at least some reasonable proportion thereof.
The question I have now is: how is our model of the universe to update given this moral dilemma? What is the new radius of the universe given this situation? It can’t be big enough for 3^^^3 dust specks piled on the edge of our universe outside of our light cone somewhere. Either way, I think the new radius ought to be termed the “Yudkowsky Radius”.
I chose RANDOM* and feel that this
1. Satisfies the suggestion of making sure that you choose / ‘state a preference’ (the result of RANDOM is acceptable to me and I would be willing to work past it and not dwell on it).
2. Satisfies the suggestion of making sure you state assumptions to the extent you’re able to resolve them (RANDOM implies a structure upon which RANDOM acts, and I was already thinking about implications of either choice, though perhaps I could have thought more clearly about the consequences of RANDOM specifically).
3. Does not compromise me as a (wannabe) rational person (ie I use the situation to update previous beliefs).
4. Does not allow the alternatives to distract afterwards (once the choice RANDOM is made, it cannot be unmade; future choices can be made RANDOM, TORTURE, SPECKS, or otherwise).
5. Does not compromise future escape routes (RANDOM, SPECK, RANDOM, TORTURE is just as acceptable a sequence of choices to me as SPECK, TORTURE, SPECK, TORTURE; it just depends what evidence, and to what extent evidence, has been entangled).
but has the additional benefit of
1. Not biasing me towards my choice very much. If SPECKS or TORTURE is chosen, it is tempting to ‘join team SPECKS’. I suppose I’ll be tempted to join team RANDOM, but since RANDOM is a team that COOPERATEs with teams SPECKS and TORTURE, something GOOD will come of that anyway.
2. Reserving my agency, and the perception of my agency, for other decisions (though they may perhaps be less important (3^^^3 dust specks is a potentially VERY IMPORTANT decision), they will be mine), such as meta-decisions on future cases involving and not involving RANDOM.
in fact let’s see if I can rephrase this post
META-TORTURE and META-SPECKS stances exist that disposition us away from TORTURE and SPECKS; they are harder to express when making a decision or discussing decisions with people, and because such stances cannot be held up to rational scrutiny by ourselves and others, we should avoid holding them. It is possible to get into a situation where we fail to resolve a Third Alternative, where we must choose, and making the correct choice, as an altruist/rationalist/etc., is important even in these cases. SPECKS and TORTURE seem to be the only choices; pick one.
I maintain however that RANDOM or DEFAULT will always be by the nature of what a choice is, always, logically, available.
*actually I chose DEFAULT/RANDOM but the more I think about it the more I think RANDOM is justified
I suppose you could view the utility as a meaningful object in this frame and abstract away the dust, too, but in the end the dust-utility system is going to encompass both anyway, so solving the problem on either level is going to solve it on both.
It appears I’m less rational than I thought. I suppose another way to rephrase that would be to draw the outline of VNM-rational decisions only up to preferences that are meaningfully resolvable (and TORTURE vs SPECKS does not appear to be, to me at least), with a heuristic of how to resolve them made clearer by interaction with unresolvable areas. I would still be making a choice, albeit one with the goal of expanding rational decision-making to the utmost possible (it would be rational to be as rational as permissible). That seems pretty cheap, though, reeking of ‘explaining everything’. Worse, one interpretation of this dilemma is that you have to resolve your preferences and that the ‘middle’ is excluded, in which case it is a hard problem and I can likely offer no further suggestion.
Is not your second link dealt with by http://lesswrong.com/lw/iv/the_futility_of_emergence/ or am I misreading one of the two? It seems to leave the main causal mechanism abstract enough to prove anything.
A sufficiently powerful genie might make safe genies by definition more unsafe. Then your wish could be granted.
edit (2015) caution: I think this particular comment is harmless in retrospect… but I wouldn’t give it much weight
Your second link is broken. In addition to the Internet archive I have posted a blog post inspired by some of my experiences with a cult, containing the article in its entirety for posterity.
Closely related: escalation of commitment. While it’s possible not to escalate commitment when you’re in a losing situation, it is often our default tendency.