Technically, this is what ‘universe’ was supposed to mean, before people started using the word ‘multiverse’. You could try using ‘universe’ if the readership of the Mirozdanie Dossier isn’t too familiar with the idea of the multiverse, or you could use something like ‘cosmos’ or ‘totality’.
endoself
This was posted a few days ago. See http://lesswrong.com/r/discussion/lw/3jx/the_decline_effect_and_the_scientific_method_link/ .
Good point. In retrospect, there was nothing exceptional about their misunderstanding of their own minds. I do, however, disagree with an unconditional condemnation of suicide based on the possibility of a positive singularity. Just because we can’t see the future doesn’t mean we can’t make a judgment under uncertainty: some probability of a fate worse than death must cancel out a sufficiently low probability of whatever good experiences are possible. Also, if a sufficiently large amount of money is necessary to prolong someone’s life, perhaps that money would be better spent on improving the chance of a positive singularity for everyone, depending on the exact result of the expected utility calculation.
This is actually an undecidable problem. If you say “find me the shortest program that does ‘x’”, then for sufficiently complex x there will be shorter candidate programs that run forever without outputting anything, but which cannot be proved never to halt, due to the halting problem. This can be fixed by imposing resource constraints on the program, or by saying “make it as short as you can”, if the AI understands such things. Presumably, if you input this request as stated, the AI would tell you it could not solve it and nothing more, so other posters should keep this problem in mind.
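To illustrate the resource-constraint fix, here is a minimal Python sketch. The step-counting interpreter, the convention of reading the result from a variable named `out`, and the budget of 1000 steps are all choices made for this sketch, not anything specified above; a real search would enumerate candidate programs and apply the same budget to each.

    import sys

    def run(src, max_steps):
        """Execute Python source `src` for at most `max_steps` trace events.
        Returns whatever the program binds to `out`, or None if it errors,
        is malformed, or fails to halt within the step budget."""
        steps = 0
        def tracer(frame, event, arg):
            nonlocal steps
            steps += 1
            if steps > max_steps:
                raise TimeoutError("step budget exhausted")
            return tracer
        env = {}
        sys.settrace(tracer)
        try:
            exec(src, env)
        except Exception:  # syntax errors, runtime errors, or the budget
            return None
        finally:
            sys.settrace(None)
        return env.get("out")

    # A halting candidate is verified quickly; a non-halting one would hang
    # an unbounded search forever, since we can't in general prove it never
    # halts. The step budget forces a verdict either way.
    print(run("out = 2 + 2", max_steps=1000))       # 4
    print(run("while True: pass", max_steps=1000))  # None

With the budget in place, “shortest program within n steps that outputs x” is decidable, at the cost of possibly missing shorter but slower programs.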
That’s not even rational; it’s affirming the consequent.
Master 6 5
Don’t add me to the turn structure. I’ll post no more frequently than once per round, but I don’t want to hold up the game.
I seem to represent P → Q and ~Q → ~P the same way in my mind, but giving the resulting fallacies different names reduces ambiguity, so I guess this is a useful distinction.
Fixed, thanks.
I don’t think this is the same distinction. Instrumental vs. terminal is not specific to humans, but this seems to be about how different types of motivation affect human psychology. Goals seem to correspond to far-mode motivation, abstractly causing something to be planned for in the long term, while rewards are near-mode; they are explicitly caused by certain actions and motivate immediate action. Rewards also seem to be the kind of thing that behavioral psychology describes, and that can be harnessed using the techniques in http://lesswrong.com/lw/2dg/applying_behavioral_psychology_on_myself/ .
I think it was at least as strongly implied that the cessation of existence was immediate as that the exchange took place in French.
I think people are more responsive to this kind of conditioning when they know they are signaling an agreement than when they actually have a disagreement, especially because downvoting makes the signaling appear useless to the signaler.
Humans are the only things capable of reliably generating things that seem non-obvious to humans. The only reason the universe seems so good at it is because we pay less attention to the obvious things. I don’t think we can improve on this issue easily enough for it to be useful for the purposes of the game.
I don’t think I derived this implication from the ‘I think, therefore I am’; I think I got it from how it happened right after, though I can’t be sure about that specific instance of causation in my brain.
A truth table, for better or worse, contains no field for “strong implication contradicted”.
Best summary of the justification for Bayesian AI I’ve ever heard.
Yes, but only to the SIAI, due to the standard optimal philanthropy argument.
Oh wait, is there income tax if you work for a charity? In that case, rather than donating, just ask for a salary reduction. Use http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/ to make it still feel like donating, if you find that useful. The only exception is if there is some kind of donation matching that you don’t think will reach its limit unless you take your full salary and donate some of it back (or one without a limit; do they do those?). This might cause a conflict of interest, though, especially with an unlimited donation-matching drive, as SIAI could just give you a $1 million salary and have you donate it all back under donation matching. I can’t see any situation in which that wouldn’t cause enough bad press to outweigh the monetary benefit.
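To make the comparison concrete, here is a toy calculation in Python. The 25% tax rate and dollar-for-dollar match are made-up numbers, and the tax model is deliberately simplified (one flat rate, no deductions), just to show the shape of the trade-off:

    def charity_receives(amount, tax_rate, match_rate):
        """Compare forgoing `amount` of salary against taking it,
        paying income tax on it, and donating the remainder back."""
        salary_cut = amount                          # never taxed, never matched
        donate_back = amount * (1 - tax_rate) * (1 + match_rate)
        return salary_cut, donate_back

    # Hypothetical 25% tax rate with dollar-for-dollar matching:
    print(charity_receives(1000, tax_rate=0.25, match_rate=1.0))  # (1000, 1500.0)
    # Without matching, the salary cut dominates:
    print(charity_receives(1000, tax_rate=0.25, match_rate=0.0))  # (1000, 750.0)

Under these assumptions, donating back only wins when the match rate is high enough to recover more than the tax paid.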
Denying the antecedent with P and Q:
P → Q
~P
Therefore ~Q
Affirming the consequent with ~Q and ~P:
~Q → ~P
~P
Therefore ~Q
Wow, I feel kind of bad just writing those chains of “deduction”. Anyway, the same conclusion was drawn from the same minor premise; the only difference is the major premise, and P → Q and ~Q → ~P are equivalent.
edit: formatting
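For what it’s worth, a brute-force truth-table check (a quick Python sketch, not anything from the original exchange) confirms both the equivalence and the shared counterexample:

    from itertools import product

    for P, Q in product([True, False], repeat=2):
        implies = (not P) or Q            # P -> Q
        contrapositive = Q or (not P)     # ~Q -> ~P, i.e. (not ~Q) or ~P
        assert implies == contrapositive  # the two major premises are equivalent

        # Both fallacies infer ~Q from the major premise plus ~P.
        # P=False, Q=True satisfies the premises but not the conclusion.
        if implies and not P and not (not Q):
            print(f"counterexample: P={P}, Q={Q}")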
That depends on the definition of ‘same’. All fallacies imply each other, but the premises and conclusions in these two should be represented identically by a computer.
I agree, and I think many here will.
Interestingly, this is something people expect to agree on more than they actually do. Most people agree that there could be a fate worse than death, but some people would choose to endure anything to keep living, though I don’t know how many of them would maintain that choice once they actually had to endure such a fate, and I don’t see any ethical way of finding out. Both groups, at least from what I’ve seen, see their choice as obvious and are surprised that anyone disagrees.