Which is maximised when f = 0.5
I calculate f = 5⁄8, not 1⁄2.
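For what it’s worth, the two answers are consistent with a Kelly calculation under different assumptions about the payout (this reconstruction is mine; the thread doesn’t state the setup explicitly). With win probability p and a payout of b:1, the Kelly fraction is

$$f^* = p - \frac{1 - p}{b},$$

so with p = 3⁄4 this gives f* = 1⁄2 at even odds (b = 1) but f* = 5⁄8 at a 2:1 payout (b = 2).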
The changes I’ve made for this version may seem trivial
Well, in one version you are being extorted for money, whereas in the other you are merely being bribed. If you buy Eliezer’s theory that you should pay up for bribes but not for extortion (because paying bribes increases the probability that people will try to bribe you, which is good, whereas paying extortionists increases the probability that people will try to extort you, which is bad), then the difference matters.
Why can’t player 1 just make a really bad move, then switch with player 2 no matter what he plays? That seems like it would give player 1 (who then becomes player 2) a huge advantage.
It’s not right to say that the Copenhagen interpretation means that “only quantum mechanics” is aleatory. First of all, QM describes all physical phenomena, so presumably what you meant was “only microscopic phenomena”. But this is not right either: chaotic dynamical systems amplify microscopic differences into macroscopic differences, and therefore turn microscopic aleatory randomness into macroscopic aleatory randomness. There may even be enough chaos in a coin flip to make it aleatorily random.
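As a minimal illustration of that amplification (the logistic map and the perturbation size here are my choices for the sketch, not anything from the discussion):

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x) with r = 4,
# started a microscopic distance apart, become macroscopically different
# within a few dozen steps.
r = 4.0
x, y = 0.3, 0.3 + 1e-12  # microscopic initial difference

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```

Since the separation roughly doubles each step, a difference of 10⁻¹² reaches order 1 in about 40 iterations.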
My conception of mathematics is that you start with a set of axioms and then explore the implications of them. There are infinitely many possible sets of starting axioms you can use [1].
This is a popular view but in my opinion it is wrong. My conception of math is that you start with a set of definitions and the axioms only come after that, as an attempt to formalize the definitions. For example:
The natural numbers are defined as the objects that you get by starting with a base object “zero” and iterating a “successor operation” arbitrarily many times. Addition and multiplication on the natural numbers are defined recursively according to certain basic formulas. The axioms of Peano arithmetic can then be viewed as simply a way of formalizing these definitions: most of the axioms are just the recursive definitions of addition and multiplication, and the induction schema is an attempt to formalize the fact that all natural numbers result from repeatedly applying the successor operation to 0.
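Concretely, the recursive definitions in question are (writing S for the successor operation)

$$a + 0 = a, \quad a + S(b) = S(a + b), \quad a \cdot 0 = 0, \quad a \cdot S(b) = a \cdot b + a,$$

and the induction schema says, for each formula φ: if φ(0) holds, and φ(n) implies φ(S(n)) for every n, then φ(n) holds for every n.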
The universe of sets is defined as the collection you get by starting with nothing and repeatedly growing the collection, at each stage replacing it with the set of all its subsets (i.e. its powerset). The axioms of Zermelo-Fraenkel set theory are an attempt to state true facts about this universe of sets.
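In standard notation this growing collection is the cumulative hierarchy

$$V_0 = \varnothing, \quad V_{\alpha+1} = \mathcal{P}(V_\alpha), \quad V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{ for limit ordinals } \lambda,$$

with the universe of sets being the union of the stages V_α over all ordinals α.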
Of course, it’s possible to claim that the definitions in question are not valid: they are not “rigorous” in the sense of modern mathematics, i.e. they do not follow from axioms, because they are logically prior to the axioms. This is particularly true for the definition of the universe of sets, which besides being vague has two further problems: it presupposes the notion of a “subset” of a collection while we are still trying to define the notion of a set, and it’s not clear when we are supposed to “stop” growing the collection (not at “infinity”, since the axiom of infinity implies we are supposed to continue on past infinity). But Peano arithmetic doesn’t have those problems, and in my opinion it is therefore on an epistemologically sound basis. And honestly, much (most?) of modern mathematics can be translated into Peano arithmetic; people use ZFC for convenience, but it’s often not actually necessary.
also because sharing the planet with a slightly smarter species still doesn’t seem like it bodes well. (See humans, neanderthals, chimpanzees).
As far as I can tell from a quick Google search, current evidence doesn’t show that neanderthals were any less smart than humans.
Yes. If f and g are in the original category and are inverses of each other, the same will be true in any larger category (technically: in any category which is the codomain of a functor whose domain is the original category).
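The one-line proof: since functors preserve composition and identities, if g ∘ f = id then

$$F(g) \circ F(f) = F(g \circ f) = F(\mathrm{id}) = \mathrm{id},$$

and symmetrically for f ∘ g, so F(f) and F(g) remain inverses.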
OK, maybe if we look at some other definitions of equality we can get a grip on it? In set theory, you say that two sets are equal if they’ve got the same elements. How do you know the elements are the same i.e. equal? You just know.
You are misunderstanding the axiom of extensionality, which states that two sets A and B are equal if both (1) every element of A is an element of B and (2) every element of B is an element of A. This does not require any nebulous notion of “they’ve got the same elements”, and is completely unrelated to the concept of equality at the level of elements of A and B.
By the way, the axiom of extensionality is an axiom rather than a definition; in set theory, equality is treated as an undefined primitive, axiomatized as in first-order logic with equality. This matters because if A and B are equal according to the axiom of extensionality, then the substitution property of equality implies that A is in some collection of sets C if and only if B is in C.
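For the record, the axiom in first-order notation is

$$\forall A \, \forall B \, \bigl( \forall x \, (x \in A \leftrightarrow x \in B) \rightarrow A = B \bigr),$$

where = is the primitive equality of first-order logic.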
But if you enrich the category with some more discriminating maps, say distance preserving ones, then the sphere and cube are no longer equal. Conversely, if you reduce the category by removing all the isomorphisms between the sphere and the cube, then they are no longer equal.
Actually, you have just described the same thing twice: there are fewer distance-preserving maps than there are continuous ones, and restricting to distance-preserving maps is exactly what removes all the isomorphisms between the sphere and the cube.
So if the climate is moving out of the optimal temperature for the species, it might make sense for you to produce more females, because they are a lower risk strategy?
This seems confused to me. In general, males are more risk-seeking than females because (inclusive) fitness is not a linear function of success at endeavors: the function is closer to linear for males and more like linear-with-a-cutoff for females. But measured in units of fitness itself, males and females are both perfectly risk-neutral, since what determines whether a mutation propagates through a population is precisely its expected fitness.
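The underlying asymmetry can be stated with Jensen’s inequality: if fitness w is a concave function of success S (the linear-with-a-cutoff case), then

$$\mathbb{E}[w(S)] \le w(\mathbb{E}[S]),$$

so variance in S is penalized and the low-variance strategy wins, while for linear w the two sides are equal and variance is a matter of indifference. Risk-neutrality in fitness itself is then just the statement that selection compares expected fitness and nothing else.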
I would expect that if a species has more females than males in some circumstances, then either females are cheaper to raise for some reason, or else the imbalance is due to some fact of biology that the DNA can’t really control directly.
There are some repeated paragraphs:
Elaine nodded. “Tell me, suppose that instead you had a hundred times as many wolves captured, and brought to those forests for release—what would happen then?”
Elaine looked a little surprised, before her face went expressionless again. “Yes, that’s so. Like you said, there’s no Magic powerful enough to directly oppress the farmers and shopkeepers of a whole country. So we’re not looking for a straightforward curse, but some new factor that has changed Santal’s balancing point.”
Let’s talk about a specific example: the Ultimatum Game. According to EY the rational strategy for the responder in the Ultimatum Game is to accept if the split is “fair” and otherwise reject in proportion to how unfair he thinks the split is. But the only reason to reject is to penalize the proposer for proposing an unfair split—which certainly seems to be “doing something conditional on the other actor’s utility function disvaluing it”. So why is the Ultimatum Game considered an “offer” and not a “threat”?
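Here is a minimal sketch of one way to cash out “reject in proportion to unfairness” (the pot size of 10, the fair share of 5, and the functional form are all my illustrative assumptions, not anything canonical): the responder accepts an unfair offer just often enough that proposing less than the fair split never profits the proposer in expectation.

```python
def acceptance_probability(offer, pot=10, fair_share=5):
    """Accept with a probability chosen so that the proposer's expected
    take, (pot - offer) * p, never exceeds pot - fair_share.
    Illustrative functional form, not a canonical definition."""
    if offer >= fair_share:
        return 1.0
    return (pot - fair_share) / (pot - offer)

for offer in (5, 4, 2, 1):
    p = acceptance_probability(offer)
    expected_take = (10 - offer) * p
    print(f"offer {offer}: accept with p = {p:.2f}, "
          f"proposer's expected take = {expected_take:.2f}")
```

Under this rule the responder’s randomized rejection is exactly a penalty conditioned on the proposer’s choice, which is the tension pointed out above.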
Yeah, but what does “purposefully minimize someone else’s utility function” mean? The source code just does stuff. What does it mean for it to be “on purpose”?
It all depends on what you mean by “sufficiently intelligent / coherent actors”. For example, in this comment Eliezer says that it should mean actors that “respond to offers, not to threats”, but in 15 years no one has been able to cash out what this actually means, AFAIK.
Here’s Joe Carlsmith making the second argument: https://joecarlsmith.com/2022/01/17/the-ignorance-of-normative-realism-bot
It is often said that “the conclusions of deductive reasoning are certain, whereas those of inductive reasoning are probable”. I think this contrast is somewhat misleading and imprecise: the certainty of deductive conclusions just means that they necessarily follow from the premises, but the conclusion itself might still be probabilistic.
Example: “If I have a fever, there’s a 65% probability that I have the flu. I have a fever. Therefore, there’s a 65% probability that I have the flu.”
There’s something off about this example. In deductive reasoning, if A implies B, then A and C together also imply B. But if A is “I have a fever” and C is “I have the flu” then A and C do not imply “there’s a 65% probability that I have the flu” (since actually there is a 100% chance).
I think what is going on here is that the initial statement “If I have a fever, there’s a 65% probability that I have the flu” is not actually an instance of material implication (in which case modus ponens would be applicable) but rather a ceteris paribus statement: “If I have a fever, then all else equal there’s a 65% probability that I have the flu.” And then the “deductive reasoning” part would go “I have a fever. And I don’t have any more information relevant to whether I have the flu than the fact that I have a fever. Therefore, there’s a 65% probability that I have the flu.”
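Formally, the contrast is that deductive implication is monotonic while conditional probability is not:

$$A \rightarrow B \ \models \ (A \wedge C) \rightarrow B, \qquad \text{but } P(B \mid A) = 0.65 \text{ puts no constraint on } P(B \mid A \wedge C).$$

Taking C = B in the flu example gives P(flu | fever ∧ flu) = 1, which is exactly the failure described above.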
Depends on how dysfunctional the society is.
You’re right that with the right reference class, SSA doesn’t imply the doomsday argument. This sensitivity to a choice of reference class is one of the big reasons not to accept SSA.
Basically both of these arguments will seem obvious if you fall into camp #2 here, and nonsensical if you fall into camp #1.
Memento is easily one of the best movies about “rationality as practiced by the individual” ever made. [...] When the “map” is a panoply of literal paper notes and photographs, and the “territory” is further removed from one’s lived experience than usual… it behooves one to take rationality, bias, motivated cognition, unquestioned assumptions, and information pretty damn seriously!
Wasn’t the main character’s attempt at “rationality as practiced by the individual” kind of quixotic, though? I didn’t get the impression that the moral of the story was “you should be like this guy”. He would have been better off skipping the complicated systems and just getting help for his condition in a more standard way...
Never mind, for some reason I thought you were being offered a 2:1 payout as well as lopsided odds; that doesn’t appear to be the case.