# Oscar_Cunningham

Karma: 6,161
• You’re probably right, but I can think of the following points.

Its rule is more complicated than Life’s, so it’s worse as an example of emergent complexity from simple rules (which was Conway’s original motivation).

It’s also a harder setting in which to demonstrate self-replication. Any self-replicator in Critters would have to be fed from some food source.

• Yeah, although probably you’d want to include a ‘buffer’ at the edge of the region to protect the entity from gliders thrown out from the surroundings. A 1,000,000 cell thick border filled randomly with blocks at 0.1% density would do the job.

• This is very much a heuristic, but good enough in this case.

Suppose we want to know how many times we expect to see a pattern with n cells in a random field of area A. Ignoring edge effects, there are A different offsets at which the pattern could appear. Each of these has a 1/2^n chance of being the pattern. So we expect at least one copy of the pattern if n < log_2(A).

In this case the area is (10^60)^2, so we expect patterns of size up to 398.631. In other words, we expect the ash to contain any pattern you can fit in a 20 by 20 box.
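As a sanity check, the arithmetic of this heuristic is easy to reproduce (the 10^60 × 10^60 field size is the one used above):

```python
import math

# Side length of the random field, as in the discussion above.
side = 10**60
A = side ** 2        # total area: 10^120 cells

# The heuristic: a fixed n-cell pattern appears at each of ~A offsets
# with probability 2^-n, so we expect at least one copy when n < log2(A).
n_max = math.log2(A)
print(n_max)         # ≈ 398.63, i.e. patterns of up to ~398 cells
```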

• Most glider guns in random ash will immediately be destroyed by the chaos they cause. Those that aren’t will eventually have their glider stream absorbed by an eater, which neutralises them. But yes, such things could pose a nasty surprise for any AI trying to clean up the ash: when it removes the eater it will suddenly have a glider stream coming towards it! But this doesn’t prove it’s impossible to clear up the ash.

• See here https://conwaylife.com/forums/viewtopic.php?f=7&t=1234&sid=90a05fcce0f1573af805ab90e7aebdf1 and here https://discord.com/channels/357922255553953794/370570978188591105/834767056883941406 for discussion of this topic by Life hobbyists who have a good knowledge of what is and isn’t possible in Life.

What we agree on is that the large random region will quickly settle down into a field of ‘ash’: small stable or oscillating patterns arranged at random. We wouldn’t expect any competitor AIs to form in this region, since an area of 10^120 is only likely to contain arbitrary patterns of sizes up to log_2(10^120) ≈ 398 cells, which almost certainly isn’t enough to do anything smart.

So the question is whether our AI will be able to cut into this ash and clear it up, leaving a blank canvas for it to create the target pattern. Nobody knows a way to do this, but it’s also not known to be impossible.

Recently I tried an experiment where I slowly fired gliders at a field of ash, along twenty adjacent lanes. My hope had been that each collision of a glider with the ash would on average destroy more ash than it created, thus carving a diagonal path of width 20 into the ash. Instead I found that the collisions created more ash, and so a stalagmite of ash grew towards the source at which I was creating the gliders.

• Another good one is the spell ‘Assume for contradiction!’, which when you are trying to prove p gives you the lemma ¬p.

• The rule in modal logic is that we can get ⊢□p from ⊢p, not that we can get □p from p.

True:

If PA proves p, then PA proves that PA proves p.

False:

If p, then PA proves p.

EDIT: Maybe it would clarify to say that ‘⊢p’ and ‘□p’ both mean ‘PA (or whichever theory) can prove p’, but ‘⊢’ is used when talking about PA, whereas ‘□’ is used when talking within PA.

• From our vantage point of ZFC, we can see that PA is in fact consistent. But we know that PA can’t prove its own consistency or inconsistency. So the classic example of a system which is consistent but unsound is PA + ¬Con(PA). This system is consistent since deriving a contradiction in it would amount to a proof by contradiction of consistency in PA, which we know is impossible. But it’s unsound since it falsely believes that PA is not consistent.
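The consistency argument in the paragraph above can be laid out as a short derivation (standard presentation, not from the original comment):

```latex
\text{Suppose } PA + \neg\mathrm{Con}(PA) \vdash \bot. \\
\text{Then } PA \vdash \neg\mathrm{Con}(PA) \to \bot \quad \text{(deduction theorem)}, \\
\text{i.e. } PA \vdash \neg\neg\mathrm{Con}(PA), \text{ so } PA \vdash \mathrm{Con}(PA), \\
\text{contradicting G\"odel's second incompleteness theorem } (PA \nvdash \mathrm{Con}(PA)).
```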

Your proof of ‘consistency → soundness’ goes wrong in the following way:

Suppose no soundness: ¬(□p→p); then □p∧¬p.

This is correct. But to be more clear, a theory being unsound would mean that there was some p for which the sentence ‘□p∧¬p’ was true, not that there was some p for which the sentence ‘□p∧¬p’ was provable in that theory. So then in the next line

From ¬p, by necessitation □¬p

we can’t apply necessitation, because we don’t know that our theory proves ¬p, only that p is false.

• You can’t write down ‘∀p: Provable(p)→p’ in PA, because in order to quantify over sentences we have to encode them as numbers (Gödel numbering).

We do have a formula Provable, such that when you substitute in the Gödel number p you get a sentence Provable(p) which is true if and only if the sentence p represents is provable. But we don’t have a formula True, such that True(p) is true if and only if the sentence p represents is true. So the unadorned p in ‘∀p: Provable(p)→p’ isn’t possible. No such formula is possible since otherwise you could use diagonalization to construct the Liar Paradox: p⟷¬True(p) (Tarski’s undefinability theorem).

What we can do is write down the sentence ‘Provable(p)→p’ for any particular Gödel number p. This is possible because when p is fixed we don’t need True(p); we can just use the sentence p represents directly. I think of this as a restricted version of soundness: ‘soundness at p’. Löb’s theorem then tells us exactly which p PA is sound at: precisely those p which PA can prove.

• I think you’ve got one thing wrong. The statement isn’t consistency, it’s a version of soundness. Consistency says that you can’t prove a contradiction, in symbols simply ¬□⊥. Whereas soundness is the stronger property that the things you prove are actually true, in symbols □p→p. Of course first-order logic can’t quantify over sentences, so you can’t even ask the question of whether PA can prove itself sound. But what you can ask is whether PA can prove itself sound for particular statements, i.e. whether it can prove □s→s for some s.

What Löb’s theorem says is that it can only do this for a trivial class of s: the ones that PA can prove outright. Obviously if PA can prove s then it can prove □s→s (or indeed q→s for any q). Löb’s theorem tells you that these obvious cases are the only ones for which you can prove PA sound.
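For reference, the theorem in symbols (standard statement):

```latex
\text{If } PA \vdash \Box s \to s \text{, then } PA \vdash s.
\qquad \text{Internalised: } PA \vdash \Box(\Box s \to s) \to \Box s.
```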

• Tails can alternate between fat and thin as you go further out. If heights were normally distributed with the same mean and variance then there would be fewer people above 7ft than there are now, but the tallest man would be taller than the tallest man now.
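To illustrate how tails can cross, here is a quick comparison (my example, not from the comment) of a standard normal against a Laplace distribution scaled to the same mean and variance: the normal has more mass one standard deviation out, but the Laplace has more mass three standard deviations out.

```python
import math

def normal_sf(x):
    # Survival function of the standard normal.
    return 0.5 * math.erfc(x / math.sqrt(2))

def laplace_sf(x):
    # Survival function (x >= 0) of a zero-mean Laplace with unit
    # variance: variance = 2b^2 = 1, so scale b = 1/sqrt(2).
    return 0.5 * math.exp(-math.sqrt(2) * x)

print(normal_sf(1), laplace_sf(1))  # normal tail is fatter here
print(normal_sf(3), laplace_sf(3))  # Laplace tail is fatter here
```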

• North Korea were caught cheating in 1991 and given a 15 year ban until 2007. They were also disqualified from the 2010 IMO because of weaker evidence of cheating. Given this, an alternative hypothesis is that they have also been cheating in other years and weren’t caught. The adult team leaders at the IMO do know the problems in advance, so cheating is not too hard.

• One other argument I’ve seen for Kelly is that it’s optimal if you start with $a and you want to get to $b as quickly as possible, in the limit of b >> a. (And your utility function is linear in time, i.e. -t.)

You can see why this would lead to Kelly. All good strategies in this game will have somewhat exponential growth of money, so the time taken will be proportional to the logarithm of b/a.
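A back-of-envelope sketch of that proportionality, assuming repeated even-money bets with a fixed edge (the win probability p = 0.6 is illustrative, not from the comment):

```python
import math

p = 0.6                  # assumed win probability of an even-money bet
f = 2 * p - 1            # Kelly fraction for even odds
# Expected log-growth of the bankroll per bet under Kelly betting.
g = p * math.log(1 + f) + (1 - p) * math.log(1 - f)

def expected_bets(a, b):
    # Time to grow the bankroll from a to b is roughly log(b/a) / g.
    return math.log(b / a) / g

print(expected_bets(100, 10**6))  # a few hundred bets
```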

So this is a way in which a logarithmic utility might arise as an instrumental value while optimising for some other goal, albeit not a particularly realistic one.