# All the posts I will never write

*This post has been written for the first **Refine** blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research.*

*(With thanks to Adam Shimi, who suggested the title and idea.)*

**Rationality, Probability, Uncertainty, Reasoning**

Failures of The Aumann Agreement Theorem

The famous Aumann Agreement Theorem states that rational reasoners can never agree to disagree. In day-to-day life we clearly see many situations where rational reasoners do agree to disagree. Are people just bad rationalists, or are there more fundamental reasons that the Aumann Agreement Theorem can fail?

I review all the ways in which the Aumann Agreement Theorem can fail that I know of, including failures based on indexical information, computational-complexity obstacles, divergent interpretations of evidence, Hansonian non-truth-seeking, and more.
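When the theorem's hypotheses do hold, the mechanism the failure modes break can be seen in a toy Geanakoplos–Polemarchakis dialogue, in which two agents repeatedly announce their posteriors and converge. The partitions, event, and state below are my own made-up example, not from any of these posts:

```python
from fractions import Fraction

# Four equally likely states; both agents estimate the event E = {1, 4}.
STATES = {1, 2, 3, 4}
E = {1, 4}
P1 = [{1, 2}, {3, 4}]      # agent 1's information partition
P2 = [{1, 2, 3}, {4}]      # agent 2's information partition
OMEGA = 1                  # the true state

def cell(partition, s):
    return next(c for c in partition if s in c)

def post(partition, s, public):
    """Posterior of E given private cell at s, intersected with public info."""
    live = cell(partition, s) & public
    return Fraction(len(live & E), len(live))

def dialogue():
    public = set(STATES)   # states consistent with all announcements so far
    history = []
    partitions = [P1, P2]
    turn = 0
    while True:
        p = partitions[turn % 2]
        q = post(p, OMEGA, public)
        history.append(q)
        # Everyone learns the announcement: keep exactly the states that
        # would have produced the same announced number.
        public = {s for s in public if post(p, s, public) == q}
        if len(history) >= 2 and history[-1] == history[-2]:
            return history
        turn += 1

# Announced posteriors: 1/2, 1/3, 1/2, 1/2 -- initial disagreement, then agreement.
print(dialogue())
```

Each announcement leaks information about the announcer's private cell; the public set of possible states shrinks until the posteriors coincide. The failure modes above each break one of the assumptions this loop relies on.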

Warren Buffett: The Alpha of Wall Street

If we observe a trader that consistently beats the market, that should be evidence against the Efficient Market Hypothesis.

A trader could also just have been lucky. How much should we update against the EMH and how much should we expect the trader to beat the market in the future?

Can we quantify how much information the market has absorbed? This is very reminiscent of ‘wows’, the unit of Bayesian surprise in Bayesian statistics.
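The luck-versus-skill update can be sketched as a toy Bayesian calculation. All numbers here are hypothetical: I assume a "skilled" trader beats the market 70% of years, a lucky one 50%, and a 5% prior on skill:

```python
from math import comb

# Hypothetical record: the trader beat the market in 12 of 15 years.
n, k = 15, 12
p_luck, p_skill = 0.5, 0.7   # per-year win probability under each hypothesis
prior_skill = 0.05           # prior that a given trader has real alpha

like_luck = comb(n, k) * p_luck**k * (1 - p_luck)**(n - k)
like_skill = comb(n, k) * p_skill**k * (1 - p_skill)**(n - k)

# Bayes' rule over the two hypotheses.
posterior_skill = (prior_skill * like_skill) / (
    prior_skill * like_skill + (1 - prior_skill) * like_luck
)
print(f"P(skill | record) = {posterior_skill:.2f}")
```

Even an impressive-looking streak moves a skeptical prior only partway, which is the sense in which the question "how much should we update against the EMH?" has a quantitative answer once the hypotheses are pinned down.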

The Bid-Ask Spread and Epistemic Uncertainty / Prediction Markets as Epistemic Fog of War

If you know **A** will resolve you should **buy** shares on **A**; if you know **not A** will happen you should buy shares on **not A**. If you think **A** will **not resolve** you should **sell** shares on **A**. The bid-ask spread measures bet-resolution uncertainty.

Suppose an adversary has an interest in showing you **A** if **A** happens and for it not to resolve if **not A**, i.e. **selective reporting**. Then: **buy** **A** and **sell** **not A**. When an earnings call comes in, the bid-ask spread increases.
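One toy way to make "the bid-ask spread measures bet-resolution uncertainty" concrete. The payoff model and all numbers below are my own assumptions, not the post's: a YES share pays 1 if the market resolves YES, 0 if it resolves NO, and refunds your stake if it never resolves, so only resolved worlds matter for pricing:

```python
def fair_price(p_resolve_yes, p_resolve_no):
    """Risk-neutral price of a YES share when non-resolution refunds the stake."""
    return p_resolve_yes / (p_resolve_yes + p_resolve_no)

# You believe P(A) = 0.6. In an honest world every outcome resolves, but a
# selectively reporting adversary might suppress resolution in up to half
# of the not-A worlds. Quoting against both scenarios opens a spread.
honest = fair_price(0.6, 0.4)            # every world resolves
suppressed = fair_price(0.6, 0.4 * 0.5)  # half the not-A worlds never resolve
bid, ask = min(honest, suppressed), max(honest, suppressed)
print(f"bid {bid:.2f}, ask {ask:.2f}, spread {ask - bid:.2f}")
```

Uncertainty about *which* worlds resolve, not about A itself, is what widens the quoted spread in this model.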

Where Forecasting goes Wrong...

Forecasting is now a big deal in the rationalist community. I argue that a slavish adherence to Bayesian Orthodoxy leads to missing most of the value of prediction markets.

What do we mean when we talk about Probability?

Possibility theory is prior to Probability Theory. Probability theory is possibility theory + Cournot’s principle.

Cournot’s principle, that a possible event of epsilon/zero probability will never happen, is the fundamental principle of probability theory.

Cf. Shafer on the history of Cournot’s principle.

But what happens when we observe an epsilon/zero-probability event? We obtain a contradiction requiring belief revision.

Wow!1! I made a productive mistake

An exposition of ‘Bayesian Surprise Attracts Human Attention’, focused on the notion of ‘Bayesian surprisal’ measured in wows. There has been a ton of interest in Predictive Processing and Friston’s Free Energy Principle. The discussion is often hampered by equivocation between different quantities that are the log of something. This post will try to clearly disambiguate between these notions and give both mathematical and intuitive explanations.

I argue that the notion of a ‘productive mistake’ can be formalized as the choice to engage with a high-wow, high-entropy source.

Compare risk-seeking behaviour in MaxCausalEnt and Schmidhuber’s Artificial Curiosity?
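For reference, Itti & Baldi's Bayesian surprise is just the KL divergence from prior to posterior (they measure it in 'wows'; I compute it in bits here, sidestepping their choice of log base). A minimal sketch with hypothetical numbers:

```python
from math import log2

def kl_bits(post, prior):
    """Bayesian surprise D(posterior || prior), in bits."""
    return sum(p * log2(p / q) for p, q in zip(post, prior) if p > 0)

# Hypothetical update: a datum shifts belief over three hypotheses.
prior = [0.70, 0.20, 0.10]
posterior = [0.10, 0.30, 0.60]
print(f"Bayesian surprise: {kl_bits(posterior, prior):.2f} bits")
```

The same formula, with different arguments, gives several of the "log of something" quantities the post wants to disambiguate, which is exactly why they get conflated.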

**Foundations of Reasoning**

Atocha Aliseda’s Axioms of Abduction

A book review of Aliseda’s underappreciated ‘Abductive Reasoning’.

From SEP: “In the philosophical literature, the term ‘abduction’ is used in two related but different senses. In both senses, the term refers to some form of explanatory reasoning. However, in the historically first sense, it refers to the place of explanatory reasoning in *generating* hypotheses, while in the sense in which it is used most frequently in the modern literature it refers to the place of explanatory reasoning in *justifying* hypotheses.”

Abduction is a form of explanatory reasoning. Instead of reasoning from premises to conclusions like deduction, it is the study of possible premises or explanations for given conclusions or observations.

Modern logic & proof theory is highly developed and provides many formal models of deduction. In contrast, formal systems for abduction are scarce (but see abductive logic programming).

Aliseda investigates a formal system of abduction that I judge to be of intense interest.

A sign of the significance of this system is that it is also able to accommodate ‘belief revision’, the study of how to resolve contradictions. Given a contradiction, we can ask which assumptions we could reject to obtain a consistent system again. There are many connections with fundamental concerns in alignment.

Conceptual Splintering

Conceptual splintering is a phenomenon in mathematics where two syntactically different but semantically equivalent definitions splinter into semantically nonequivalent definitions when the background theory is generalized (made weaker). I give many examples from maths.

Cf. Armstrong’s argument that the related model splintering is the central problem in alignment.

Generative versus Recognizable concepts

For some concepts, examples can be generated; for others, examples can only be recognized.

I investigate the many variants of these notions and explore links with predicative mathematics.^{[1]}

Links:

- IDA, Rob Miles

**Vibes of Mathematics**

Vibes of Mathematics

A meta-sociology of different fields of mathematics. Very touchy-feely. No proofs, just vibes.

The Rising Sea

Meditation on the rising sea metaphor for the value of abstraction in mathematics.

Coming Revolutions in Mathematics: Percolations, Phase Transitions and the Synthetic Wave

The story of mathematics in the 20th century has been one of increasing specialisation. I argue that we are currently at the beginning of a grand reversal of this trend: the connections and correlations between different fields of mathematics are multiplying.

Synthetic mathematics is a terribly confusing name for the mathematics version of domain-specific languages, where the objects of study are defined intrinsically. Compare the way Euclid defines lines by the way they are used (“synthetic”) versus the ‘standard’ way of defining them as infinite collections of points (“analytic”).

Synthetic differential geometry, synthetic topology, synthetic homotopy theory, synthetic Euclidean geometry.

Synthetic homotopy theory, aka Homotopy Type Theory, goes further: not only does it provide a domain-specific language for homotopy theory, it can in fact serve as a highly elegant foundation of mathematics wherein, as in traditional ZFC, mathematics itself can be developed. It is, however, much better behaved than ZFC, as well as being natively constructive. This is all quite surprising; homotopy theory is an extremely specialized branch of mathematics that on first, second and third glance has nothing to do with sets or foundations of mathematics!

The Synthetic Wave is a coming revolution in mathematics in which these domain-specific languages (“synthetic mathematics”) will ‘reappear’ at the foundations.

ULTIMATE! type theory

Homotopy type theory was just the beginning. Ultimate type theory is a not-yet-existing but immanent & imminent foundation of mathematics that will unify HoTT, Linear Logic, Differential Linear Logic, Differential Calculus I & II, predicative^{[1]} synthetic topology, game semantics I & II, ludics, and the pi-calculus.

Why did the Fields Medal become big: The Ouroboros of Prestige

Why did the Fields medal become the de facto top prize, more prestigious than the others?

The Ouroboros of Prestige: winners of prestigious prizes become prestigious; prizes derive prestige from prestigious winners.

I argue the Fields medal became prestigious because of two factors: (1) early on, the Fields medal was not so prestigious, and the committee made surprising-at-the-time choices of workers in fields, like algebraic geometry and algebraic topology, that were at the time quite obscure. These picks turned out to be prescient when those fields blew up. (2) The age limit means only young researchers are chosen; it is difficult to determine the impact and importance of a young researcher, so the prize conveys a lot of information.

**Life, Complexity, Optimisation, Entropy, Death & Decay**

What is Life?

The idea that biological life can be understood as a thermodynamic phenomenon goes back to Clausius and Schrödinger. I survey recent ideas in this space.

I argue that all organisms suffer from rot. There is a thermodynamic lower bound on rot: the larger and more complex the organism, the more rot. I argue that biological life solves this fundamental problem with a bounded-error lifecycle strategy. That is, death explains life.

Links: Dissipative structures, Friston but see [ERRORS IN FRISTON], Schrodinger, England, MaxCaliber

Complexity and Optimisation: a failed research programme in Four Acts

An oft-argued point is the qualitative intelligence gap, or lack thereof, between Ape and Man. Recent evidence mostly points towards human brains being scaled-up versions of chimp brains, but counterarguments like the lack of a grammar-based ape language are still strong. Is there a fundamentally special feature encoded in human genes that apes don’t have? The gap between humans and apes is quite small from a genetic and evolutionary-time perspective (1.2 percent of DNA, a few million years of evolution), but it may still contain a lot of information. Can we quantify how much information evolution could have ‘put into human genes’ over the timespan apes → humans?

Intelligent designers argue that evolution could never create the complexity that we see in biological life. Evolutionary biologists argue yes she can. But can we actually quantify how much complexity evolution can and cannot create?

Question: How much information-theoretic complexity can evolution create over a given time-period?
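One crude back-of-envelope upper bound, to fix the order of magnitude. Both the truncation-selection model and every number below are my own assumptions, not claims from the post: selection that keeps a fraction f of each generation can fix at most log2(1/f) bits of information per generation.

```python
from math import log2

f = 0.5            # fraction surviving to reproduce (hypothetical)
years = 6e6        # rough chimp-human divergence time in years
generation = 20.0  # years per generation (hypothetical)

generations = years / generation
bits = generations * log2(1 / f)  # upper bound on bits fixed by selection
print(f"at most {bits:.0f} bits over {generations:.0f} generations")
```

Three hundred thousand bits is under 40 kilobytes, far less than the raw 1.2% DNA difference, which is one reason "how much of the gap is information from selection?" is a real question rather than a rhetorical one.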

Following John Baez’s Information Geometry lectures, we might investigate the KL divergence and its less-well-known but equally important cousin, the Fisher information metric. I explain why it and various related proposals don’t give a satisfying answer.

Nevertheless, I am convinced and argue that a satisfying answer exists.
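The standard link between the two quantities (a textbook fact, not specific to Baez's lectures) is that the Fisher information is the second-order coefficient of the KL divergence: D(p || p + d) ≈ ½ I(p) d². It is easy to check numerically for a Bernoulli model:

```python
from math import log

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def fisher_bern(p):
    """Fisher information of the Bernoulli family at p."""
    return 1.0 / (p * (1 - p))

p, d = 0.3, 1e-3
exact = kl_bern(p, p + d)
approx = 0.5 * fisher_bern(p) * d**2
print(exact, approx)  # agree to a fraction of a percent
```

So locally the two "cousins" carry the same geometry; the interesting disagreements are global.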

**infraBook Club**

infraBook Club I: Corrigibility is bad ashkually

A short recap of Vanessa’s brilliant anti-corrigibility strategy to prevent the AI from just hooking the human overseer up on heroin. Side-effects may include destructive uploading.

infraBook Club II: Inverse Reinforcement Learning & MaxCausalEnt

A discussion of the Inverse Reinforcement Learning part of Vanessa’s PreDCA + possible connections with Maximum Causal Entropy.

infraBook Club III: r/iamverysmart & Minimum Description Length

Vanessa suggests a way to detect the intelligence of an agent. I discuss relations with the minimum description length principle.

infraBook Club IV: The Anti-Theological Schutzwall

Filter out/wall out supersuperintelligence demons in the prior.

A discussion of whether acausal attacks are actually relevant in practice, whether Abrahamic religion is acausal blackmail, and whether the Solomonoff prior is malign.

infraBook Club V: Practical Exorcism of Hermeneutically Shrewd Daemons

How does one actually exorcise the supersuperintelligent demons? What are the barriers to implementing Vanessa’s PreDCA in practice?

Possible problems with preDCA, including the danger of a hidden highly distributed intelligent agent that seems stupid and so is not detected by the intelligent precursor filter.

**Miscellaneous**

Bound Overflowing Ossified Knowledge: a survey

Introduction to Applied Hodge Theory

Why do we need mental breaks? Why do we get mentally tired?

Famous scientists often credit dreams and downtime with creative insights. Anecdotally, many people report that they can focus only for limited few-hour time slots of creative, focused, conscious work.

Naively, one would think that the brain gets tired like a muscle, yet brain-as-muscle might be a misleading analogy. The brain does not seem to get tired or overexert itself. For instance, the amount of energy used does not vary significantly with the task.

Global Workspace Theory suggests that focused conscious reasoning is all about serially integrating summarized computations from many parallel unconscious computing units. After a serial conscious thought finishes, the conclusion is backpropagated to the unconscious computing units. Subsequently, these unconscious units need time to work on the backpropagated conscious thought before there is enough ‘fertile ground’ for further serial conscious thought.

This could explain why it seems easier to switch between different conscious activities.

Some problems are like those Gel Squishy Toys

Some problems have a ‘conservation of difficulty’ law: direct approaches appear to make progress but ultimately fail. Conservation of difficulty often occurs in worst-case analyses and adversarial contexts. I suggest AI alignment might be one such problem.

Multistep Fidelity explains Rapid Capability Gain

Many examples of Rapid Capability Gain can be explained by a sudden jump in the fidelity of a multi-step, error-prone process. As the single-step error rate is gradually lowered, there is a sudden transition from a low-fidelity to a high-fidelity regime. Examples abound in cultural transmission, development economics, planning & consciousness in agents, the origin of life, and more.
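The jump can be seen in a one-line toy model (my own illustration, not from the post): if an n-step process succeeds only when every step does, end-to-end fidelity is (1 − e)^n, which transitions sharply as the per-step error rate e falls.

```python
# A 100-step process: gradually lowering the per-step error rate produces
# a sudden transition from near-zero to near-perfect end-to-end fidelity.
n = 100  # number of steps (hypothetical)
for e in [0.10, 0.05, 0.02, 0.01, 0.001]:
    fidelity = (1 - e) ** n
    print(f"per-step error {e:.3f} -> end-to-end fidelity {fidelity:.3f}")
```

Going from a 10% to a 1% per-step error rate takes end-to-end fidelity from essentially zero to over a third; halving the error rate again roughly doubles it. Gradual improvement in the parts, discontinuous improvement in the whole.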

- ^
Predicative mathematics is a foundation of mathematics that rejects ‘impredicative’ definitions. Roughly speaking, you can think of predicative mathematics as rejecting the powerset axiom.


How many of these blogposts would you write if you had unlimited resources other than time (a full-time editor, research team, even maybe focus groups)?

Love the idea. How efficient! :)

About mental breaks, I guess this might help creativity for the same reason meditation and naps help partial consolidation of memory traces (see below for a recent thesis showing these effects).

https://qspace.library.queensu.ca/bitstream/handle/1974/27576/Dastgheib_Mohammad_202001_MSC.pdf?sequence=3&isAllowed=y

Specifically, I would speculate that consolidation means reorganizing memories, and that reorganizing memories helps make sense of this information.

This is a great format for a post.

One thing I find is that people focus too much on *failures* of AAT, rather than the much more common case of *successes*. I think almost every conversation you have relies on AAT. For instance, I’m the one who cooks dinner in my home, and my girlfriend regularly asks me what options there are for dinner, and believes me when I tell her what the options are.

That she asks me what the options are shows that she, in a probabilistic sense, disagrees with me; I put high probability on a specific set of options that I know we have the ingredients for, while she puts low probability on those options due to not knowing we have the ingredients. She then updates her belief based on my response because she trusts me to be rational (I’m the one who ordered the ingredients/who observed the invoice, and my rationality thus makes me able to know what the food options are) and honest (I wouldn’t e.g. randomly say that the options are spaghetti carbonara, vegan sandwiches or lobster when actually I believe options are burgers, poke, or risotto).

This seems to me to be the basis of lots of conversations; you talk about stuff that you think the other person has experience with, and you trust them to be honest/rational and therefore you update your beliefs to match what they say.

I sometimes get the impression that the rationalist community doesn’t realize that Aumann’s Agreement Theorem works just fine most of the time.

I might steal the exorcism metaphor for the post I probably will write about the complexity prior.

Related to

One of my old blog posts I never wrote (I did not even list it in a “posts I will never write” document) is about how corrigibility is anti-correlated with goal security.

Something like: if you build an AI that doesn’t resist someone trying to change its goals, it will also not try to stop bad actors from changing its goals. (I don’t think this particular worry applies to Paul’s version of corrigibility, but this blog post idea is from before I learned about his definition.)

The germline doesn’t rot, though. Human egg and sperm-producing cells must maintain (epi-)genomic integrity indefinitely.

Germlines do rot. It’s just countered by branching and pruning faster than the rot.