What about time and date formats? While some formats based on a single scalar value (e.g. Unix time) are common in certain applications, the system most commonly used by humans to specify the time of day uses at least three different units (hours, minutes and seconds), with a conversion factor (60) that isn't a power of ten. The rules for calculating dates are even more complicated. Time zones complicate matters further; a single ‘%Y-%m-%d %H:%M:%S’ string doesn't even unambiguously specify a point in time, unless one already knows which time zone the sender is using. From a purely algorithmic perspective, this is a really poor way of specifying what is (to a good approximation; none of the systems mentioned deal with time dilation) a scalar value. While it encodes certain additional information (the position of a planet relative to its local sun), it also makes performing arithmetic on datetimes a lot more difficult than it needs to be.
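A minimal sketch (Python, standard library only; the timestamp string and the +09:00 offset are made-up examples) of the ambiguity described above: the same ‘%Y-%m-%d %H:%M:%S’ string names two different instants depending on which time zone the sender had in mind.

```python
from datetime import datetime, timezone, timedelta

stamp = "2020-04-01 12:00:00"
naive = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")   # no zone information

as_utc = naive.replace(tzinfo=timezone.utc)                    # interpret as UTC
as_plus9 = naive.replace(tzinfo=timezone(timedelta(hours=9)))  # interpret as UTC+9

# The two interpretations land nine hours (32400 seconds) apart on the Unix scale.
print(as_utc.timestamp() - as_plus9.timestamp())   # 32400.0
```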
If someone firmly believes the end-of-the-world event is going to occur on Apr. 1st, 2020, he should be ready to issue promissory notes to pay any amount on Apr. 2nd, in return for a very small (or no?) charge. Probably not for no charge, since writing the notes takes some effort, and any rationalist will assign a nonzero (though possibly very small) probability to the apocalypse not occurring in time.
A possible problem with bets made over such long periods is that people who don’t expect gold to become worthless in general by date X, but who do expect it to become worthless for them by date X, may also take the bet, skewing the results. A simple example would be an old hedonist, who strongly expects to die within a few years, doesn’t care at all about what happens to his heirs, and would like some more money to spend while he is alive. Assuming that his prediction holds, by taking a bet that the apocalypse will happen by X, he gets some more money to spend while he is alive, and can then default on his debt by dying.
Phil: Build the set from the used exponents of the powers of two. For instance, 1101[2] = 2**0 + 2**2 + 2**3.
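A small sketch (Python; the function name is just for illustration) of that construction: map a non-negative integer to the set of exponents that appear in its binary expansion.

```python
def exponent_set(n: int) -> set:
    """Return the set of exponents e with bit e of n set, e.g. 0b1101 -> {0, 2, 3}."""
    return {e for e in range(n.bit_length()) if (n >> e) & 1}

assert exponent_set(0b1101) == {0, 2, 3}   # 13 = 2**0 + 2**2 + 2**3
```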
Minor quibble: since binary 0.1111… is 1, you need a number like 0.1010101… to get an actual counterexample.
Afaict, the original post doesn’t contain any mention of binary fractions. An infinite binary sequence consisting entirely of ones doesn’t represent any finite integer.
Imagine a group of 100,000 people, all of whom fit Bill’s description (except for the name, perhaps). If you take the subset of all these persons who play jazz, and the subset of all these persons who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
Nitpicking: Concluding that this is a strict inclusion implicitly assumes that there is at least one jazz player who is not an accountant in the original set. Otherwise, the two subsets may still be equal (and thus, equal in size).
josh wrote: Sebastian,
I know a jazz musician who is not an accountant.
Josh, note that it is not sufficient for one such person to exist; that person also has to be in the set of 100,000 people Eliezer postulated to allow one to conclude the strict inclusion between the two subsets mentioned.
One of the most obvious examples of commonly encountered unreliable information is advertising. Gilbert's results suggest that knowing that the information in advertisements is highly unreliable doesn't make you immune to their effects. This suggests that it's a good idea to avoid perceiving advertisements entirely, especially in situations where you're trying to concentrate on something else. The obvious way to do this is to aggressively use ad-blockers wherever possible; unfortunately there are still media where this isn't practical.
I’m horribly confused by this thread.
Eliezer: That I still have to hold open doors for old ladies, even at the cost of seconds, because if I breeze right past them, I lose more than seconds. I have to strike whatever blows against Death I can.
Why? What is wrong with taking an Expected Utility view of your actions? We’re all working with limited resources. If we don’t choose our battles to maximum effect, we’re unlikely to achieve very much.
I understand your primary reason (it’s easier to argue for cryonics if you’re signed up yourself), but that one only applies to people trying to argue for cryonics, and for whom the financial obligation is less of a cost than the time and persuasiveness lost in these arguments.
I don’t understand the secondary reasons at all.
Transform into centerless altruists, and we would have destroyed a part of what we fought to preserve.
Agreed, but 1/(6.6*10^9) isn't a very large part, and that's not even considering future lives. An Expected Utility calculation still suggests that if you can exert any non-negligible effect on the probability of a Friendly Intelligence Explosion or its timing, that effect will vastly outweigh whatever happens to yourself (according to most common non-egoistical value systems).
J Thomas: You're neglecting that there might be some positive side-effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem to mostly add entropy to the thought processes of the affected persons) it would likely still be negative.
Copying g’s assumption that higher-order effects should be neglected, I’d take the torture. For each of the 3^^^3 persons, the choice looks as follows:
1.) A 1/(3^^^3) chance of being tortured for 50 years.
2.) A 1 chance of getting a dust speck.
I’d definitely prefer the former. That probability is so close to zero that it vastly outweighs the differences in disutility.
For those who would pick TORTURE, what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.
Torture, again. From the perspective of each affected individual, the choice becomes:
1.) A (10^(10^100))/(3^^^3) chance of being tortured for (10^(10^100)) years.
2.) A 1 chance of a dust speck.
(or very slightly different numbers if the (10^(10^100)) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)
I'd still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first.
Tom McCabe wrote:
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.
Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the torture is (10^(10^100))/(3^^^3), regardless of how many identical copies of this person exist.
Mike wrote:
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day…
I'm wondering how you would phrase the daily choice in this case, to get the properties you want. Perhaps like this:
1.) Add a period of (50*365)/3^^^3 days to the time period you will be tortured at the end of your life.
2.) Get a speck.
This isn't quite the same as the original question, as it gives choices between the two extremes. And in practice, this could get rather annoying, as just having to answer the question would be similarly bad to getting a speck. Leaving that aside, however, I'd still take the (ridiculously short) torture every day.
The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.
I don’t think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
For Robin’s statistics:
Torture on the first problem, and torture again on the followup dilemma.
Relevant expertise: I study probability theory, rationality and cognitive biases as a hobby. I don't claim any real expertise in any of these areas.
Andrew Macdonald asked:
Any takers for the torture?
Assuming the torture-life is randomly chosen from the 3^^^3 sized pool, definitely torture. If I have a strong reason to expect the torture life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.
Recovering irrationalist asks:
OK here goes… it’s this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to singalong to one of his songs. BARNEYLOVE or SONGS?
The answer depends on whether I expect to make it through the 50 year ordeal without permanent psychological damage. If I know with close to certainty that I will, the answer is BARNEYLOVE. Otherwise, it’s SONGS; while I might still acquire irreversible psychological damage, it would probably take much longer, giving me a chance to live relatively sane for a long time before then.
Unknown wrote: The variation cannot be stopped, since no two physical things will ever be exactly alike.
Assuming we can neglect position in space and movement vectors, different individual elementary particles (e.g. electrons) are already exactly alike. Individual atoms are frequently exactly alike, and the same can be said for molecules. We don't have a way to precisely copy macroscopic objects yet, but that might change with further technological development. Do you think that Molecular Nanotechnology is fundamentally impossible? If so, why?
And even if you can't build machines with absolute precision, that doesn't mean you can't build them in a way that will prevent them from being actively destructive. If you're worried that machines you build may become destructive because of construction errors, add a test stage to the construction process, and/or self-test components that will cause the machine to shut down harmlessly when an error is detected. Noticing random corruption isn't very hard, from a computer science perspective. There are plenty of cheap and effective hash functions available. By adding a bit of redundant data, you can drive the probability of a random corruption going unnoticed down to infinitesimal levels.
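A minimal sketch (Python, standard library only; SHA-256 and the function names are my own illustrative choices, not a specific proposal) of the check described above: store a hash alongside the data, and refuse to act on data whose hash no longer matches.

```python
import hashlib

def seal(data: bytes) -> bytes:
    """Append a 32-byte SHA-256 digest to the payload."""
    return data + hashlib.sha256(data).digest()

def unseal(sealed: bytes) -> bytes:
    """Return the payload, or raise if random corruption is detected."""
    data, digest = sealed[:-32], sealed[-32:]
    if hashlib.sha256(data).digest() != digest:
        raise ValueError("corruption detected; shut down harmlessly")
    return data

blob = seal(b"machine blueprint")
assert unseal(blob) == b"machine blueprint"   # intact data passes the check
```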
Unknown, it's physically possible to have a population with internal variation and replication while ensuring that it doesn't fall into certain destructive patterns with specialized controls. This is probably not a particularly good example since afaik multicellular organisms don't normally have much variation in their genetic code, but one of the methods for “outlawing evolution” (as mentioned by Eliezer) in multicellular organisms is macrophages attacking tumor cells. It's not perfect, of course; individuals still die from cancer. But that mechanism is something produced by natural selection; an intelligent designer could do much better. You haven't yet convincingly argued that variation and replication necessarily lead to destructive runaway effects; you might need controls (which natural selection may never come up with) to prevent it, but that doesn't make it impossible.
This is a giant cheesecake fallacy. We could, in the future, create a society of beings which are identical down to the last bit. I, and I suspect most other people, would find such a society highly undesirable.
I reject the claim of committing a GCF. My statement was a reply to Unknown's claim that “no two physical things will ever be exactly alike”, which appeared in an argument that was specifically not restricted to humans, biological beings or intelligent entities. For this reply I was thinking more along the lines of “MNT assembler” than “>= human-level intelligence”. If you build a technological infrastructure using small self-replicating machines, you probably don't want them to acquire random mutations without shutting down.
Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree—if they have common knowledge of their probability estimates, they have the same probability estimate.
In addition to what James Annan said, they also both have to know (with very high confidence) that they are in fact honest Bayesians. Both sides being honest isn't enough if either suspects the other of lying.
But if you are one of the little people perched atop a cube, and you know these two facts, there is still a third piece of information you need to make predictions: “Which cube am I standing on?”
This nicely illustrates why discrete uniform probability distributions (as I understand them) over infinite sets don’t work very well. I can’t make sense of this thought experiment. I’ll dump my reasoning below, and would be grateful for any clarifications about what I’m doing wrong.
Assume I'm one of the people on one of those cubes, know about the entire series including the people on them, and haven't looked at the number below me yet. What's the probability I'm on the first cube, 1? Well, that's one possibility, and there are … countably infinitely many … alternatives, so if that probability isn't zero, it's as close as I can make it. The same reasoning applies to every other cube. I know all of the cubes exist, each has a person, and I'm one such person. If this is all I know, I have no particular reason to assign a non-uniform probability distribution over the possible outcomes. So, since I will assign the same probability to finding myself on each of the cubes, that leaves me with the following options:
1) I can assign a probability of zero, which blows up in my face since I have to conclude I won't find myself on any of the cubes.
2) I can assign a non-zero probability, which blows up in my face since by summing those probabilities I will necessarily get a total probability of greater than one (or any finite number, for that matter).
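Stated in symbols (a worked restatement of the two options above, assuming the same probability p is assigned to every cube):

$$
P(\text{cube } n) = p \ \text{ for all } n \in \{1, 2, 3, \dots\}
\;\Longrightarrow\;
\sum_{n=1}^{\infty} P(\text{cube } n) =
\begin{cases}
0 & \text{if } p = 0,\\
\infty & \text{if } p > 0,
\end{cases}
$$

and neither case can equal the required total probability of 1.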
In line with the general silliness of Roger Wilson’s “Disgusting characters” stories, the character the quoted eulogy was for did, in the end, come back to life (they broke the rules on that one). This doesn’t offer any consolation to us in the real world, of course. Real people are still annihilated, and the same might still be in store for any of us. It’s really time we (as in “the human civilization”) did something about that.
I think the “work on FAI theory” suggestion made in a comment to the previous post was a good one; not because it would yield an FAI design when the answer was passed back through the chronophone, but because the output would get Archimedes working on the most important problem visible to him. Alternatively, if we think in hindsight that Archimedes simply doesn't have the necessary resources to trigger a major catastrophe, and we want him to focus on doing good instead of not doing bad, that could be modified to “build a seed AI, any seed AI”. Since I'm not currently working on either, I probably shouldn't be the one speaking that advice, though.