
# MrMind

How probable is it that someone knows their own internal belief structure? And how probable is it that someone who does know their internal belief structure will tell you about it truthfully, instead of offering a self-serving lie?

The causation order in the scenario is important. If the mother is instantly killed by the truck, then she cannot feel any sense of pleasure after the fact. But if you want to say that the mother feels the pleasure during the attempt or before, then I would say that the word “pleasure” here is taking on the meaning of “motivation”, and the points raised by Viliam in another comment apply: it becomes just a play on words, devoid of intrinsic content.

So far, Bayesian probability has been extended to infinite sets only as a limit of continuous transfinite functions. So I’m not quite sure of the official answer to that question.

On the other hand, what I know is that even standard measure theory cannot assign positive probability to a singleton when the support is continuous: under a continuous distribution, every atomic element gets measure zero.
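A quick numerical illustration of the point (a sketch using only the standard library, with a uniform distribution standing in for any continuous one): the probability of an interval around a point shrinks to zero along with the interval, so the “probability of a singleton” can only be 0.

```python
# Probability of shrinking intervals around x = 0.3 under Uniform(0, 1).
# For a continuous distribution, P({x}) = lim_{eps -> 0} P([x - eps, x + eps]) = 0.

def uniform_prob(a, b):
    """P([a, b]) under Uniform(0, 1): length of the overlap with [0, 1]."""
    return max(0.0, min(b, 1.0) - max(a, 0.0))

x = 0.3
probs = [uniform_prob(x - eps, x + eps) for eps in (0.1, 0.01, 0.001, 1e-6)]
print(probs)  # each interval has probability about 2 * eps, heading to 0
```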

And if you’re willing to bite the bullet and define such an algebra through the use of a measurable cardinal, you end up with an ultrafilter that lets you define infinitesimal quantities.

Under the paradigm of probability as extended logic, it is wrong to distinguish between empirical and demonstrative reasoning, since classical logic is just the limit of Bayesian probability with probabilities 0 and 1.
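A minimal sketch of that limit (hypothetical function names): under Bayes’ rule, a prior of exactly 0 or 1 is immune to any evidence, so those two extreme values behave exactly like classical truth values.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence if evidence > 0 else prior

# An ordinary prior moves with the evidence...
print(bayes_update(0.5, 0.9, 0.1))  # roughly 0.9
# ...but the extreme values are fixed points, like logical False and True.
print(bayes_update(0.0, 0.9, 0.1))  # -> 0.0
print(bayes_update(1.0, 0.9, 0.1))  # -> 1.0
```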

Besides that, category theory was born more than 70 years ago! Sure, very young compared to other disciplines, but not *so* young. Also, the work of Lawvere (the first to connect categories and logic) began in the 1970s, so it dates back at least forty years.

That said, I’m not saying that category theory cannot in principle be used to reason about reasoning (the effective topos is a wonderful piece of machinery); it just cannot say much right now about Bayesian reasoning.

Yeah, my point is that they aren’t truth values per se: not intuitionistic, not linear, not many-valued (MV), not anything else.

I’ve also dabbled in the matter, and I have two observations:

I’m not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling tells me they are two different things altogether. Sure, operations on truth values should turn into operations on probabilities, but their underlying logic is different (probabilities, after all, should be measures, while truth values form algebras).

While 0 and 1 are not (good) epistemic probabilities, they are of paramount importance in any model of probability. For example, P(X|X) = 1, so 0 and 1 should be included in any model of probability.

The way it’s used in the set theory textbooks I’ve read is usually this:

1. Define a *successor* function on a set S.
2. *Assume* the existence of an *inductive* set: one that contains a given set and all its successors. This is a weak and very limited form of infinite induction.
3. Use Replacement on the inductive set to define a *general* form of transfinite recursion.
4. Use transfinite recursion and the union operation to define the step “taking the limit of a sequence”.
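The first two ingredients above can be written out explicitly; this is standard ZF notation, not anything specific to one textbook:

```latex
% Successor of a set S:
S^{+} \;=\; S \cup \{S\}

% Axiom of Infinity: there exists an inductive set, i.e. one that
% contains the empty set and is closed under successor:
\exists I\, \bigl( \varnothing \in I \;\wedge\; \forall x\, ( x \in I \rightarrow x^{+} \in I ) \bigr)
```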

So, there is indeed the assumption of a kind of infinite process before the assumption of the existence of an infinite set, but it’s not (necessarily) the ordinal ω. Nor can you use it by itself to deduce anything else; you still need Replacement. The same can be said for the existence and uniqueness of the empty set, which can be deduced from the axioms of Separation.

This approach is neither equivalent to nor weaker than having transfinite recursion by fiat; it’s simply the correct way to proceed if you want to make the fewest new assumptions.

Anyway, as far as I can tell, having a well-defined theory of sets is crucial to the definition of the surreals, since they are based on set operations and ontology, and use infinite sets of every kind.

On the other hand, I don’t understand your problem with the impredicativity of the definitions of the surreals. These are often resolved into recursive definitions, and since ZF sets are well-founded, you never run into any problem.

> Transfinite induction does feel a bit icky in that finite prooflines you outline a process that has infinitely many steps. But as limits have a similar kind of thing going on I don’t know whether it is any ickier.

Well, transfinite induction / recursion reduces (at least in ZF set theory) to the existence of an infinite set plus the Replacement axioms (the image of a set under a class function is a set). I suspect it’s the latter you don’t trust.

The first link in the article is broken...

Obviously, only the wolves that survive.

Beware of selection bias: even if veterans show more productivity, it could just be that military training has selected those with higher discipline.

The diagram at the beginning is very interesting. I’m curious about the arrow from relationships to results… care to explain? Does it refer to joint works or collaborations?

On the other hand, it’s not surprising to me that AI alignment is a field that requires much more research and math than software-writing skill… the field is completely new and not yet well formalized; probably your skill set is misaligned with the needs of the market.

> The first thing that you must accept in order to seek sense properly is the claim that minds actually make sense

This is somewhat weird to me. Since Kahneman & Tversky, we know that System 2 is mostly good at rationalizing the actions taken by System 1, to create a self-coherent narrative. Not only do minds in general not make sense, my mind in particular lacks any sense. I’m here just because my System 1 is well adjusted to this modern environment; I don’t *need* to make any sense.

From this perspective, “making sense” appears to be a tiring and pointless exercise...

Isn’t “just the right kind of obsession” a natural ability? It’s not that you can orient your ‘obsessions’ at will...

Two of my favorite categories show that they really are everywhere: the free category on any graph and the presheaves on gamma.

The first: take any directed graph, unfocus your eyes and instead of arrows consider paths. That is a category!
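This construction fits in a few lines of code (a sketch with hypothetical names): objects are the vertices, morphisms are the paths (with empty paths as identities), and composition is path concatenation.

```python
# Free category on a directed graph.
# A morphism is (source, target, path-of-edge-names); identities are empty paths.
edges = {"f": ("A", "B"), "g": ("B", "C")}  # edge name -> (source, target)

def identity(obj):
    return (obj, obj, ())

def arrow(name):
    s, t = edges[name]
    return (s, t, (name,))

def compose(m1, m2):
    """m2 after m1: defined only when the target of m1 is the source of m2."""
    s1, t1, p1 = m1
    s2, t2, p2 = m2
    assert t1 == s2, "morphisms not composable"
    return (s1, t2, p1 + p2)

gf = compose(arrow("f"), arrow("g"))
print(gf)  # -> ('A', 'C', ('f', 'g'))
# Identities are units for composition, as the category axioms demand:
print(compose(identity("A"), arrow("f")) == arrow("f"))  # -> True
```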

The second: take any finite graph. Take sets and functions that realize this graph. This is a category; moreover, you can make it dagger-compact, so you can do quantum mechanics with it. Take as the finite graph gamma, which is just two vertices with two parallel arrows between them. Sets and functions that realize this graph are… any graph! So CT allows you to do quantum mechanics with graphs.
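The claim that “sets and functions realizing gamma are exactly graphs” can be made concrete (a sketch, not the dagger-compact structure itself): a realization of gamma picks two sets E and V and two functions src, tgt : E → V, and that data is precisely a directed multigraph.

```python
# A realization of gamma (two objects, two parallel arrows) is:
#   two sets E (edges), V (vertices) and two functions src, tgt : E -> V.
# That data is exactly a directed multigraph.
E = {"e1", "e2", "e3"}
V = {"A", "B"}
src = {"e1": "A", "e2": "A", "e3": "B"}
tgt = {"e1": "B", "e2": "B", "e3": "B"}  # e1, e2 are parallel; e3 is a loop on B

# Being a valid realization just means src and tgt are total functions into V:
assert all(src[e] in V and tgt[e] in V for e in E)
print(sorted((e, src[e], tgt[e]) for e in E))
```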

Amazing!

Lambda calculus is, though, the internal language of a very common kind of category (the cartesian closed ones), so, in a sense, category theory allows lambda calculus to do computations not only with functions, but also with sets, topological spaces, manifolds, etc.
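The internal-language claim can be made tangible (a sketch, with Python standing in for any cartesian closed setting): the hom-set isomorphism Hom(A×B, C) ≅ Hom(A, C^B) is exactly currying, which is what gives lambda abstraction its meaning.

```python
# In a cartesian closed category, Hom(A x B, C) is isomorphic to Hom(A, C^B).
# In code, the two directions of that isomorphism are curry and uncurry.

def curry(f):
    """Turn f : A x B -> C into A -> (B -> C)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """Turn g : A -> (B -> C) back into A x B -> C."""
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
print(curry(add)(2)(3))           # -> 5
print(uncurry(curry(add))(2, 3))  # -> 5, round-tripping recovers add
```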

While I share your enthusiasm toward categories, I find suspicious the claim that CT is the correct framework from which to understand rationality. Around here, it’s mainly equated with Bayesian Probability, and the categorial grasp of probability or even measure is less than impressive. The most interesting fact I’ve been able to dig up is that the Giry monad is the codensity monad of the inclusion of convex spaces into measure spaces, hardly an illuminating fact (basically a convoluted way of saying that probabilities are the most general ways of forming convex combinations out of measures).

I’ve searched and searched for categorial answers or hints about the problem of extending probabilities to other kinds of logic (or even simply extending it to classical predicate logic), but so far I’ve had no luck.

The difference between the two is literally a single summation, so… yeah?

The fact is surprising when coupled with the fact that particles do not have a definite spin direction before you measure it. The anti-correlation is maintained non-locally, but the directions are decided by the experiment.

A better example is: take two spheres, send them far away from each other, then make one sphere spin around any axis you want. How surprised would you be to learn that the other sphere spins around the same axis in the opposite direction?
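The unsurprising classical case can be simulated in a few lines (a sketch, not the quantum experiment): give one sphere a random angular-momentum direction and the other its negation; the projections on any common measurement axis then always have opposite signs.

```python
import math
import random

def random_unit_vector():
    """Roughly uniform direction: a normalized 3D Gaussian vector."""
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    spin_a = random_unit_vector()
    spin_b = [-x for x in spin_a]   # the other sphere spins oppositely
    axis = random_unit_vector()     # measure both along any shared axis
    # Projections are exact negatives, so their product is never positive:
    assert dot(spin_a, axis) * dot(spin_b, axis) <= 0
print("perfect anti-correlation in 1000 trials")
```

The classical trick is that the shared axis is fixed at the source; the quantum surprise in the text above is that no such pre-existing axis exists before measurement.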