Yes. If f and g are in the original category and are inverses of each other, the same will be true in any larger category (technically: in any category that is the codomain of a functor whose domain is the original category).
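For concreteness, here is the one-line check (writing F for any functor out of the original category and f : A → B, g : B → A for the mutually inverse maps; the letters are just for illustration):

    \[
    F(g) \circ F(f) = F(g \circ f) = F(\mathrm{id}_A) = \mathrm{id}_{F(A)},
    \qquad
    F(f) \circ F(g) = F(f \circ g) = F(\mathrm{id}_B) = \mathrm{id}_{F(B)},
    \]

so F(f) and F(g) are still inverses in the codomain category.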
OK, maybe if we look at some other definitions of equality we can get a grip on it? In set theory, you say that two sets are equal if they’ve got the same elements. How do you know the elements are the same i.e. equal? You just know.
You are misunderstanding the axiom of extensionality, which states that two sets A and B are equal if both (1) every element of A is an element of B and (2) every element of B is an element of A. This does not require any nebulous notion of “they’ve got the same elements”, and is completely unrelated to the concept of equality at the level of elements of A and B.
By the way, the axiom of extensionality is an axiom rather than a definition; in set theory equality is treated as an undefined primitive, axiomatized as in first-order logic. This matters because once extensionality lets you conclude that A and B are equal, the substitution property of equality then gives you that A is in some collection of sets C if and only if B is in C.
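For reference, the standard first-order statement of extensionality is

    \[
    \forall A \, \forall B \, \bigl( \forall x \, (x \in A \leftrightarrow x \in B) \rightarrow A = B \bigr),
    \]

with equality itself supplied by the underlying logic rather than defined by the axiom.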
But if you enrich the category with some more discriminating maps, say distance preserving ones, then the sphere and cube are no longer equal. Conversely, if you reduce the category by removing all the isomorphisms between the sphere and the cube, then they are no longer equal.
Actually you have just described the same thing twice: there are fewer distance-preserving maps than continuous ones, and restricting to distance-preserving maps removes all the isomorphisms between the sphere and the cube.
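In symbols (writing S for the sphere and C for the surface of the cube, just to restate the point): every distance-preserving bijection is in particular a homeomorphism, so

    \[
    \mathrm{Isom}(S, C) \subseteq \mathrm{Homeo}(S, C),
    \qquad
    \mathrm{Isom}(S, C) = \varnothing \neq \mathrm{Homeo}(S, C),
    \]

i.e. passing to distance-preserving maps is a restriction, and it discards every isomorphism between the two.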
So if the climate is moving out of the optimal temperature for the species, it might make sense for you to produce more females, because they are a lower risk strategy?
This seems confused to me. In general, males are more risk-seeking than females because (inclusive) fitness is not a linear function of success at endeavors: the function is closer to linear for males and more like linear-with-a-cutoff for females. But males and females are still both perfectly risk-neutral when measured in units of fitness, since it is expected fitness that needs to be greater than average for a mutation to propagate through a population.
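A toy numerical illustration of the shape argument (everything below, including the payoff numbers and the cutoff, is made up purely for illustration):

    # Toy model: two strategies with equal expected "success" but different variance.
    # The fitness maps below are hypothetical, chosen only to illustrate the shape argument.

    safe_outcomes  = [(1.0, 5.0)]                # (probability, success): always 5
    risky_outcomes = [(0.5, 0.0), (0.5, 10.0)]   # same mean success of 5, higher variance

    def linearish_fitness(success):
        # roughly linear in success (the "closer to linear" shape above)
        return success

    def cutoff_fitness(success):
        # linear with a cutoff: success beyond the cutoff adds nothing
        return min(success, 6.0)

    def expected_fitness(outcomes, fitness):
        return sum(p * fitness(s) for p, s in outcomes)

    for label, fitness in [("linear", linearish_fitness), ("cutoff", cutoff_fitness)]:
        print(label,
              expected_fitness(safe_outcomes, fitness),
              expected_fitness(risky_outcomes, fitness))

    # linear: 5.0 vs 5.0  -> risk in success units makes no difference
    # cutoff: 5.0 vs 3.0  -> risk in success units is penalized
    # Either way, the comparison in fitness units is just a comparison of expectations,
    # which is the sense in which both are "risk-neutral" in fitness.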
I would expect that if a species has more females than males in some circumstances, it is either because females are cheaper to raise for some reason, or else due to a fact of biology that the DNA can’t really control directly.
There are some repeated paragraphs:
Elaine nodded. “Tell me, suppose that instead you had a hundred times as many wolves captured, and brought to those forests for release—what would happen then?”
Elaine looked a little surprised, before her face went expressionless again. “Yes, that’s so. Like you said, there’s no Magic powerful enough to directly oppress the farmers and shopkeepers of a whole country. So we’re not looking for a straightforward curse, but some new factor that has changed Santal’s balancing point.”
Let’s talk about a specific example: the Ultimatum Game. According to EY the rational strategy for the responder in the Ultimatum Game is to accept if the split is “fair” and otherwise reject in proportion to how unfair he thinks the split is. But the only reason to reject is to penalize the proposer for proposing an unfair split—which certainly seems to be “doing something conditional on the other actor’s utility function disvaluing it”. So why is the Ultimatum Game considered an “offer” and not a “threat”?
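For concreteness, here is a sketch of the responder strategy being described; the “fair” threshold and the linear rejection curve are my own illustrative choices, not anything specified in the original discussion.

    import random

    def responder_accepts(offer_fraction, fair_fraction=0.5):
        """Accept a 'fair' offer outright; otherwise reject with probability
        proportional to how far short of fair the offer falls."""
        if offer_fraction >= fair_fraction:
            return True
        unfairness = (fair_fraction - offer_fraction) / fair_fraction  # scaled to [0, 1]
        return random.random() > unfairness  # more unfair => more likely to reject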
Yeah, but what does “purposefully minimize someone else’s utility function” mean? The source code just does stuff. What does it mean for it to be “on purpose”?
It all depends on what you mean by “sufficiently intelligent / coherent actors”. For example, in this comment Eliezer says that it should mean actors that “respond to offers, not to threats”, but in 15 years no one has been able to cash out what this actually means, AFAIK.
Here’s Joe Carlsmith making the second argument: https://joecarlsmith.com/2022/01/17/the-ignorance-of-normative-realism-bot
It is often said that “The conclusions of deductive reasoning are certain, whereas those of inductive reasoning are probable.” I think this contrast is somewhat misleading and imprecise: the certainty of deductive conclusions just means that they necessarily follow from the premises (they are implied by the premises), while the conclusion itself might still be probabilistic.
Example: “If I have a fever, there’s a 65% probability that I have the flu. I have a fever. Therefore, there’s a 65% probability that I have the flu.”
There’s something off about this example. In deductive reasoning, if A implies B, then A and C together also imply B. But if A is “I have a fever” and C is “I have the flu” then A and C do not imply “there’s a 65% probability that I have the flu” (since actually there is a 100% chance).
I think what is going on here is that the initial statement “If I have a fever, there’s a 65% probability that I have the flu” is not actually an instance of material implication (in which case modus ponens would be applicable) but rather a ceteris paribus statement: “If I have a fever, then all else equal there’s a 65% probability that I have the flu.” And then the “deductive reasoning” part would go “I have a fever. And I don’t have any more information relevant to whether I have the flu than the fact that I have a fever. Therefore, there’s a 65% probability that I have the flu.”
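In conditional-probability notation, the two readings come apart like this (using the figures from the example above):

    \[
    P(\mathrm{flu} \mid \mathrm{fever}) = 0.65,
    \qquad
    P(\mathrm{flu} \mid \mathrm{fever} \wedge \mathrm{flu}) = 1,
    \]

so unlike material implication, the conditional-probability statement is not stable under adding premises; it licenses the 65% conclusion only when the fever is all the relevant information you have.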
Depends on how dysfunctional the society is.
You’re right that with the right reference class, SSA doesn’t imply the doomsday argument. This sensitivity to a choice of reference class is one of the big reasons not to accept SSA.
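A toy Bayesian calculation of the reference-class sensitivity (all of the numbers and the cutoff class below are invented for illustration): with the reference class “all humans who will ever live”, a low birth rank favors the small-population hypothesis; with a reference class whose size is the same under both hypotheses, the likelihoods cancel and there is no doomsday-style shift.

    # Two hypotheses about the total number of humans who will ever live (illustrative).
    small, large = 2e11, 2e14
    prior = {small: 0.5, large: 0.5}

    def posterior(likelihood):
        unnorm = {h: prior[h] * likelihood(h) for h in prior}
        z = sum(unnorm.values())
        return {h: round(v / z, 4) for h, v in unnorm.items()}

    # Reference class = all humans ever: P(my birth rank | total N) = 1/N
    # (assuming my rank is below both totals).
    print(posterior(lambda n: 1.0 / n))      # strongly favors the small total

    # Reference class with the same size K under both hypotheses
    # (e.g. "humans born before some early cutoff"): P(my rank | total N) = 1/K.
    K = 1.5e11
    print(posterior(lambda n: 1.0 / K))      # identical to the prior: no update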
Basically both of these arguments will seem obvious if you fall into camp #2 here, and nonsensical if you fall into camp #1.
Memento is easily one of the best movies about “rationality as practiced by the individual” ever made. [...] When the “map” is a panoply of literal paper notes and photographs, and the “territory” is further removed from one’s lived experience than usual… it behooves one to take rationality, bias, motivated cognition, unquestioned assumptions, and information pretty damn seriously!
Wasn’t the main character’s attempt at “rationality as practiced by the individual” kind of quixotic though? I didn’t get the impression that the moral of the story was “you should be like this guy”. He would have been better off skipping the complicated systems and just getting help for his condition in a more standard way...
Let’s say my p(intelligent ancestor) is 0.1. Imagine I have a friend, Richard, who disagrees.
No wait, the order of these two things matters. Is P(intelligent ancestor|just my background information) = 0.1 or is P(intelligent ancestor|my background information + the fact that Richard disagrees) = 0.1? I agree that if the latter holds, conservation of expected evidence comes into play and gives the conclusion you assert. But the former doesn’t imply the latter.
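For reference, conservation of expected evidence here is just the law of total probability applied to the posterior (writing H for “intelligent ancestor”, B for my background information, and D for “Richard disagrees”):

    \[
    P(H \mid B) = P(D \mid B)\,P(H \mid B, D) + P(\neg D \mid B)\,P(H \mid B, \neg D),
    \]

which ties P(H | B) to the two possible posteriors; knowing only that P(H | B) = 0.1 does not by itself pin down P(H | B, D).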
What makes certain axioms “true” beyond mere consistency?
Axioms are only “true” or “false” relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is taken to be defined as the union of the von Neumann hierarchy over all “ordinals”, but this definition depends on taking the concept of an ordinal as pretheoretic rather than defined in the usual way as a well-founded totally ordered set.
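For reference, the von Neumann hierarchy mentioned here is defined by transfinite recursion:

    \[
    V_0 = \varnothing, \qquad
    V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
    V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \ (\lambda\ \text{a limit ordinal}), \qquad
    V = \bigcup_{\alpha} V_\alpha.
    \]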
Is there a meaningful distinction between mathematical existence and consistency?
An axiom system is consistent if and only if it has some model, which may not be the intended model. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
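A standard illustration of the gap (a well-known consequence of Gödel’s theorems, not something from the original exchange): if ZFC is consistent, then by the second incompleteness theorem so is

    \[
    \mathrm{ZFC} + \neg\,\mathrm{Con}(\mathrm{ZFC}),
    \]

and by the completeness theorem this theory has a model. That model cannot be the intended one, since ¬Con(ZFC) is true in it, whereas Con(ZFC) would be true in the intended universe (given that ZFC is consistent).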
Can we maintain mathematical realism while acknowledging the practical utility of the multiverse approach?
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don’t see why this would generate any contradiction with also being able to talk about a canonical model.
How do we reconcile Platonism with independence results?
Independence results are only about what you can prove (or equivalently what is true in non-canonical models), not about what is true in a canonical model. So I don’t see any difficulty to be reconciled.
I don’t agree that I am making unwarranted assumptions; I think what you call “assumptions” are merely observations about the meanings of words. I agree that it is hard to program an AI to determine who the “he”s refer to, but I think as a matter of fact the meanings of those words don’t allow for any other possible interpretation. It’s just hard to explain to an AI what the meanings of words are. Anyway I’m not sure if it is productive to argue this any further as we seem to be repeating ourselves.
No, because John could be speaking about himself administering the medication.
If it’s about John administering the medication then you’d have to say “… he refused to let him”.
It’s also possible to refuse to do something you’ve already acknowledged you should do, so the 3rd he could still be John regardless of who is being told what.
But the sentence did not claim John merely acknowledged that he should administer the medication, it claimed John was the originator of that statement. Is John supposed to be refusing his own requests?
John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.
I don’t think the sentence is actually as ambiguous as you’re saying. The first and third “he”s both have to refer to Mark, because you can only refuse to do something after being told you should do it. Only the second “he” could be either John or Mark.
Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.
Why do you call current AI models “agentic”? It seems to me they are more like tool AI or oracle AI...
From what I can tell from a quick Google search, current evidence doesn’t show that Neanderthals were any less smart than humans.