If we are not inventive enough to find a menace not obviously shielded by lead+ocean, then more complex tasks like, say, actually designing FOOM-able AI are beyond us anyway…

# vi21maobk9vp

You say “presumably yes”. The whole point of this discussion is to listen to everyone who will say “obviously no”; their arguments would automatically apply to all weaker boxing techniques.

How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?

Once you get as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.

I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.

The univalence axiom states that two isomorphic types are equal; this means that if type A is isomorphic to type B, and type C contains A, then C has to contain B (and a few similar requirements).

Requiring that two types be equal whenever they are isomorphic means prohibiting anything we could write that distinguishes them (i.e. that handles them non-equivalently).
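What univalence prohibits is easiest to see in a language that lacks it. A minimal Python sketch (my own illustration; `Bit` is a made-up type): `bool` and a hand-rolled two-element type are isomorphic, yet we can still write a function that tells them apart, which is exactly the kind of code univalence would rule out.

```python
# Two isomorphic types: the built-in bool and a hand-rolled two-element type.
class Bit:
    def __init__(self, value: int):
        self.value = value  # 0 or 1

def bool_to_bit(b: bool) -> Bit:
    return Bit(1 if b else 0)

def bit_to_bool(b: Bit) -> bool:
    return b.value == 1

# The isomorphism: round-tripping is the identity on both elements.
assert all(bit_to_bool(bool_to_bit(b)) == b for b in (True, False))

# ...and yet the two types are distinguishable by code, so they are not
# interchangeable in Python the way univalence would demand:
def distinguish(x) -> str:
    return "bool" if isinstance(x, bool) else "Bit"
```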

Could you please clarify your question here?

Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts?
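For concreteness, here is a toy sketch (my own illustration, not from the original discussion) of the machine in question: it enumerates candidate proofs in order of length and halts as soon as one refutes ZF. The checker `refutes_zf` is a hypothetical stub; a real one would verify that a string encodes a valid ZF derivation of a contradiction.

```python
from itertools import product

ALPHABET = "()∀∃¬∧∨→=∈xv01,"  # toy symbol set for writing candidate proofs

def refutes_zf(candidate: str) -> bool:
    """Hypothetical stub: should return True iff `candidate` encodes a
    valid ZF proof of a contradiction. Left unimplemented here."""
    return False

def search(max_length=None):
    """Enumerate candidate proofs by length; return one that refutes ZF.
    With max_length=None this loop halts iff ZF is inconsistent."""
    n = 1
    while max_length is None or n <= max_length:
        for chars in product(ALPHABET, repeat=n):
            candidate = "".join(chars)
            if refutes_zf(candidate):
                return candidate
        n += 1
    return None  # no refutation among the length-bounded candidates
```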

I try to interpret it (granted, I interpret it in my worldview, which is different) and I cannot see the question here.

I am not 100% sure that even PA has a model, but I find it likely that even ZFC has one. But when I say that ZFC has a model, I mean a model whose formula parts are numbered by the natural numbers derived from my notion of successive moments of time.

The link is good, but I guess a direct explanation of this simple thing could be useful.

It is not hard to build an explicit map between R and R² (more or less by interleaving the binary expansions of the numbers).
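A finite-precision sketch of that interleaving, assuming the numbers are given as binary digit strings after the point (the usual caveat about double expansions such as 0.0111… = 0.1000… is ignored here):

```python
def interleave(a: str, b: str) -> str:
    """Merge two binary fraction strings into one: digits of `a` land on
    even positions of the result, digits of `b` on odd positions."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, "0"), b.ljust(n, "0")  # pad the shorter expansion with zeros
    return "".join(x + y for x, y in zip(a, b))

def split(c: str) -> tuple:
    """Inverse direction: un-interleave one string back into a pair."""
    return c[0::2], c[1::2]
```

So the pair (0.101…, 0.010…) maps to 0.100110…, and `split` recovers it, which is the R² ↔ R correspondence the comment alludes to.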

So the claim of the Continuum Hypothesis is:

For every property P of real numbers there exists a property Q of pairs of real numbers such that:

1) ∀x (P(x) → ∃! y Q(x,y))

2) ∀x (¬P(x) → ∀y ¬Q(x,y))

(i.e. Q describes a mapping from the support of P to R)

3) ∀x1,x2,y ((Q(x1,y) ∧ Q(x2,y)) → x1=x2)

(i.e. the map is an injection)

4) (∀y ∃x Q(x,y)) ∨ (∀x ∀y (Q(x,y) → y∈N))

(i.e. the map is either a surjection onto R or an injection into a subset of N)

These conditions say that every subset of R is either the size of R or no bigger than N.

> ZFC is the universally, unequivocally best definition of a set

Worse. You are being tricked into believing that ZFC is a definition of a set at all, while it is just a set of restrictions on what we would tolerate.

In some sense, if you believe that there is only one second-order model of the natural numbers, you have to decide which properties of natural numbers you can range over; as Cohen taught us, this involves making a lot of set-theoretical decisions, with the continuum hypothesis being only one of them.

Well-foundedness in the V_α case seems quite simple: you build an externally-countable chain of subsets which simply cannot be represented as a set inside the first model of ZFC. So the external WF is not broken because the element-relation inside the models is different, and the inner WF is fine because the chain of inner models of the external ZFC is not an inner set.

In the standard case your even-numbers explanation nicely shows what goes on — quoting is involved.

I need to think a bit to say what would halt our attempts to build a chain of transitive countable models...

Ah, sorry, I didn’t notice that the question is about model of ZFC inside a “universe” modelling ZFC+Con^∞(ZFC)

Actually, in NBG you have explicitness of assumptions and of first-order logic, and at the same time the axiom of induction is a single axiom.

Actually, if you care about cardinality, you need a well-specified set theory more than just axioms of the reals. Second-order theory has a unique model, yes, but it has the notion of “all” subsets, so it just smuggles in some set theory without specifying it. As I understand, this was the motivation for Henkin semantics.

And if you look for a set theory (explicit or implicit) for the reals as used in physics, I am not even sure you want ZFC. For example, Solovay has shown that you can use a set theory where all sets of reals are measurable without much risk of contradiction. After all, the unrestricted axiom of choice is not that natural for physical intuition.

Well, technically not every model of ZFC has a ZFC-modelling element. There is a model of “ZFC+¬Con(ZFC)”, and no element of this monster can be a model of ZFC. Not even with a nonstandard element-relation.

It may be that goal-orientation where there are no made-up rules is fun; as a good person you need to follow some of the more stupid moral norms that made sense a mere two hundred years ago.

It seems that in weak formulations it can be confirmed.

Have you read “Through the Language Glass” by Deutscher?

Choosing better words for some situations does train you in some skills. It looks like people distinguish colours more quickly if the colours have different names. For example, a Russian speaker will notice the difference between “closer to sky blue” and “closer to navy blue” faster than an English speaker because of the habit of classifying them as different colours. Deutscher cites a few different studies of that kind.

Apparently, language can also change your default reactions (how you interpret omissions): you can set up a scene on a table, then lead a person to another room and ask which table has the same scene as in the first room; whether their language uses north/south or left/right for path descriptions can be read off from the answers.

As for applications, it seems to say what you would try anyway: if you want to improve awareness of something, encourage saying it out loud every time.

Actually, what you may wonder is whether utility of increased status just has a complex shape for you.

For example, I can imagine some situations of having too little status, but in most cases I get what is personally enough for me before even trying.

Actually, whatever license you use, your content will be copied around.

If you use a proprietary license after taking CC-BY core content, copying your content will be less legal and less immoral.

A pistol to the mouth seems to require a mouth full of water for a high chance of success.

Everything you said is true.

Also, it can even be that you cannot rewire your existing brain while keeping all its current functionality and not increasing its size.

But I look at the evidence about learning (including learning to see using photoelements stimulating non-visual neurons, and learning new motor skills). Also, it looks like selection for brain size went quite efficiently during human evolution, and we want just to shift the equilibrium. I do think that building an upload at all would require a good enough understanding of cortex structure that you would be able to increase the neuron count and then learn to use the improved brain using normal learning methods.

A good upload could increase its short-term working-memory capacity for distinct objects to match more complex patterns.

On the other hand, the better you are, the more things you learn just because they are easy enough to learn to be worth your time.

And you are completely right.

I meant that designing a working FOOM-able AI (or non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.

I.e. walking the walk is harder than talking the talk.