# Oskar Mathiasen

Danish math major

Sorry, I see now that I lost half a sentence in the middle. I agree that the notions of early/mid/late game don't map well to real life, and I don't think there is a good way to make them do so. I then (meant to) propose the stages of a 4X game as perhaps mapping more cleanly onto one-shot games.

I think the most natural definitions are that early game is the part you have memorized, end game is where you can compute to the end (still doing pruning), and mid game is the rest.

So e.g. in Scrabble the end game is where there are no tiles, or few enough tiles in the bag that you can think through all (relevant) combinations of bags. I think the phases of a 4X game perhaps map on like this:

Explore: gain information that is relevant for what plan to execute

Expand: Investment phase, you take actions that maximise your growth

Exploit: You slowly start deprioritizing growth as the time remaining grows shorter.

Exterminate: You go for your win condition

The argument in the Aumann paper in favor of dropping the completeness axiom is that it makes for a better theory of how humans, businesses, and other existing agents actually reason, not that it makes for a better theory of ideal reasoning.

The paper seems to prove that any partial preference ordering which obeys the other axioms must be representable by a utility function, but that there will be multiple such representatives. My claim is that either there will be a Dutch book, or your actions will be equivalent to the actions you would have taken by following one of those representative utility functions, in which case, even though the internals don't seem like following a utility function, they are for the purposes of VNM.

But demonstrating this is hard, as it is unclear what actions correspond to the fact that A is incomparable to B.

The concrete examples of incomplete agents in the above either seem like they will act according to one of those representatives, or like they are easily Dutch bookable.

I don’t understand how you are using incompleteness. For example, to me the sentence

“agents can make themselves immune to all possible money-pumps for completeness by acting in accordance with the following policy: ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’”

Sounds like "agents can avoid all money pumps for completeness by completing their preferences in a random way," which is true but doesn't seem like much of a challenge to completeness.

Can you explain what behavior is allowed under the first but isn’t possible under my rephrasing?

Similarly can we make explicit what behavior counts as two options being incomparable?

It seems to me that FDT has the property that you associate with the “ultimate decision theory”.

My understanding is that FDT says that you should follow the policy which is attained by taking the argmax over all policies of the utility from following that policy (only including downstream effects of your policy).

In these easy examples your policy space is your space of committed actions, in which case the above seems to reduce to the "ultimate decision theory" criterion.
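As a toy illustration of that reduction (my own sketch; the payoffs and the perfectly accurate predictor are assumptions, not from the original discussion), here is the argmax-over-policies computation for Newcomb's problem, where the box contents are treated as a downstream effect of the policy:

```python
# Toy sketch (assumed payoffs): FDT as argmax over policies,
# where the predictor's response is a downstream effect of the policy.

def utility(policy: str) -> int:
    """Payoff from committing to `policy`, including the predictor's response."""
    opaque = 1_000_000 if policy == "one-box" else 0  # predictor reacts to the policy
    transparent = 1_000
    if policy == "one-box":
        return opaque
    return opaque + transparent  # two-boxing also takes the transparent box

policies = ["one-box", "two-box"]
best = max(policies, key=utility)
print(best, utility(best))  # one-box, 1000000
```

Since the policy space here is just the space of committed actions, the argmax over policies and the "ultimate decision theory" criterion pick out the same commitment.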

The assumptions made here are not time reversible, as the macrostate at time t+1 being deterministic given the macrostate at time t does not imply that the macrostate at time t is deterministic given the macrostate at time t+1.

So in this article the direction of time is given through the asymmetry of the evolution of macrostates.
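The asymmetry is easy to see in a toy model (my own minimal example, not from the article): a deterministic but non-injective forward map on macrostates gives every state exactly one successor, yet some states have several possible predecessors.

```python
# Toy macrostate dynamics: deterministic forward, non-deterministic backward.
forward = {"A": "C", "B": "C", "C": "C"}  # A and B both evolve to C

# Forward: every macrostate has exactly one successor.
assert all(isinstance(v, str) for v in forward.values())

# Backward: C has multiple possible predecessors, so knowing the
# macrostate at time t+1 does not determine the macrostate at time t.
predecessors = {}
for prev, nxt in forward.items():
    predecessors.setdefault(nxt, []).append(prev)
print(predecessors["C"])  # ['A', 'B', 'C']
```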

I think “book of X” can be usefully “translated” as beliefs about X.

The book of truth is not truth, just like the book of night is not night. I think "book of names" can be read as human categorisation of animals (giving them names), although other readings do seem plausible.

You might be interested in John Harsanyi on the topic.

He argues that the conclusion reached in the original position is (average) utilitarianism. I agree that behind the veil one shouldn't know the time (and thus can't care differently about current vs future humans). This actually causes further problems for Rawls's conception when you project back in time: what if the worst life that will ever be lived has already been lived? Then the maximin principle gives no guidance at all, and in positions of uncertainty it recommends putting all effort into preventing a new minimum from being set.

The concept of a Kolmogorov sufficient statistic might be the missing piece (cf. Elements of Information Theory, section 14.12).

We want the shortest program that describes a sequence of bits. A particularly interpretable type of such program is "the sequence is in the set X generated by program p, and among those it is the n'th element".

Example: "the sequence is in the set of sequences of length 1000 with 104 ones, generated by (insert program here), of which it is the n ≈ 10^144'th element". We therefore define f(String, n) to be the size of the smallest set containing String which is generated by a program of length n (or alternatively, where a program of length n can test for membership of the set).
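As a sanity check on the example's numbers (assuming the set is exactly the length-1000 bit strings with 104 ones), the index into that set needs about log2 C(1000, 104) bits, which is indeed an integer around 10^144:

```python
# Sanity check of the example's index size: an index into the set of
# length-1000 bit strings with exactly 104 ones needs log2 C(1000, 104) bits.
import math

set_size = math.comb(1000, 104)
index_bits = math.log2(set_size)     # bits needed to write down the index
index_digits = math.log10(set_size)  # decimal digits of the index
print(round(index_bits), round(index_digits))  # roughly 477 bits, ~10^144
```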

If you plot the logarithm of f(String,n) you will often see bands where the line has slope −1, corresponding to using the extra bit to hardcode one more bit of the index. In this case the longer programs aren’t describing any more structure than the program where the slope started being −1. We call such a program a Kolmogorov minimal statistic.

The relevance is that, for a biased coin with each flip independent, the Kolmogorov minimal statistic is the bias. And it is often more natural to think in terms of the Kolmogorov minimal statistic.

Then you violate the accurate beliefs condition. (If the world is in fact a random mixture in proportions which their beliefs track correctly, then FDT will do better when averaging over the mixture.)

I don’t think the quoted problem has that structure.

And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems.

But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S

So S causes one-boxing tendencies, and the person putting money in the box looks only at S.

So it seems to be changing the problem to say that the predictor observes your brain/your decision procedure, when all they observe is S, which, while causing "one-boxing tendencies", is not causally downstream of your decision theory.

Further, if S were downstream of your decision procedure, then FDT one-boxes whether or not the path from the decision procedure to the contents of the boxes routes through an agent, undermining the criticism that FDT has implausible discontinuities.

Can't you make the same argument you make in Schwarz's Procreation by using Parfit's hitchhiker after you have reached the city? In which case I think it's better to use that example, as it avoids Heighn's criticism.

In the case of implausible discontinuities, I agree with Heighn that there is no subjunctive dependence.

Here is a quick diagram of the causation in the thought experiment as I understand it.

We have an outcome which is completely determined by your decision to one-box/two-box and the predictor's decision of whether to put money in the one box.

The predictor decides based on the presence of a lesion (or some other physical fact).

Your decision of how many boxes to take is determined by your decision theory.

And your decision theory is partly determined by the lesion and partly by other stuff.

Now (my understanding of) the claim is that there is no downstream path from your decision theory to the predictor. This means that applying the do operator on the decision theory node doesn’t change the distribution of the choices of the predictor.
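A small simulation (my own sketch of the diagram above, with made-up probabilities) makes the claim concrete: since the predictor reads only the lesion, intervening on the decision-theory node leaves the predictor's distribution unchanged.

```python
# Toy causal structure: Lesion -> DecisionTheory -> Choice, Lesion -> Predictor.
# There is no path from DecisionTheory to Predictor, so do() on the
# decision-theory node should not change the predictor's distribution.
import random

def sample(do_decision_theory=None):
    lesion = random.random() < 0.5
    predictor_fills_box = lesion  # the predictor looks only at the lesion
    if do_decision_theory is None:
        decision_theory = "one-box" if lesion else "two-box"
    else:
        decision_theory = do_decision_theory  # the do() intervention
    return decision_theory, predictor_fills_box

random.seed(0)
n = 10_000
baseline = sum(sample()[1] for _ in range(n)) / n
intervened = sum(sample(do_decision_theory="two-box")[1] for _ in range(n)) / n
print(baseline, intervened)  # both near 0.5: the intervention changes nothing
```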

Denmark culled all mink due to worries about a covid strain in mink. It has only recently (January 1 2023) become legal to farm mink in Denmark again.

Logical inductors are actually defined by the logical induction criterion. The market bit is there to prove that it is possible to fulfill the criterion.

There is also the somewhat boring answer that probability can refer to anything which obeys the axioms of probability.

Note that Coop is a consumer cooperative, not an employee cooperative.

https://en.wikipedia.org/wiki/Consumers%27_co-operative

New report from Denmark called "Focus report on Covid-19-related hospitalizations during the SARS-CoV-2 epidemic".

https://www.ssi.dk/-/media/cdn/files/fokusrapport-om-covid-19-relaterede-hospitalsindlggelser-under-sars-cov-2-epidemien_06012022_1.pdf?la=da

It is sadly in Danish, so I will give a translation of the main results section and some of the graphs.

Summary of main results:

Theme 1:

* The older a patient is, the greater the likelihood that he or she will have a covid-19- related hospitalization of 12 hours or more.

* The proportion of short hospital stays of less than 12 hours has been fairly stable in each age group in 2021 with a few fluctuations.

* The median duration of long hospital stays (≥12 hours) has decreased from 5.5 days in March 2020, to 4.4 days in February 2021 and 4.0 days in October 2021.

* (this one is added by me) The average duration of (long?) hospitalizations has decreased from 9.2 days in March 2020, to 7.8 days in February 2021 and 7.2 days in October 2021.

Theme 2:

* Among covid-19-related admissions in the period 1 June 2020 − 18 December 2021, 82% were registered with a covid-19 diagnosis, 3% with a respiratory diagnosis or a covid-19 observation diagnosis, and 15% with another diagnosis. In the month of December (December 1, 2021 to December 18, 2021), the corresponding shares were respectively 73%, 4% and 23%.

* For all age groups ≥40 years, at least 80% of the admissions were registered with a covid-19 diagnosis. For younger adults and children, the proportion was lower.

* For the vaccinated, 75% of admissions were registered with a covid-19 diagnosis, while the proportion among the unvaccinated was 82% in 2021. The proportions of patients registered with a diagnosis incompatible with covid among vaccinated and unvaccinated were respectively 21% and 15%.

Percent of covid hospitalizations which were longer than 12 hours (in red) vs shorter than 12 hours (in blue), by age group over time.

Number of hospitalizations that are covid (top), airways or observation (middle), and other (bottom).

The above categories as a proportion of covid-positive hospitalizations over time. And the above categories as a percentage by age group (over June-December 2021).

By age and vaccination status (vaccinated is to the left; there are no vaccinated at ages below 9 in the period).

A few thoughts:

It seems weird that the median duration of a long stay is going down, but the percentage of short stays is stable.

Updated numbers from today at: https://files.ssi.dk/covid19/omikron/statusrapport/rapport-omikronvarianten-10122021-ek56

Now with English translations.

Only significant change is that hospitalizations are up to 1.4% (18 cases).

There should be daily updates found here: https://www.ssi.dk/aktuelt/nyheder/2021

Click the newest one and click "læs rapporten her" (read the report here), which takes you to a page where you can download the report for that day.

Repost from wordpress blog

Status report from Denmark

Key numbers: they give numbers of cases by day, and also give cases of omicron as a percentage of other cases.

Of 785 omicron cases in Danish citizens, 76.31% were in the double vaxed and 7.13% in the triple vaxed, compared to 73.69% double vaxed (probably also including triple vaxed) for covid in general. 14.14% were unvaxed for omicron vs 22.93% in general (over the last 7 days).

The rate of hospitalization is 1.15% for omicron (9 cases), and 1.85% in general.

Everything is probably confounded by age and region (omicron is less prevalent in children and the over-65s).

sources: https://experience.arcgis.com/experience/aa41b29149f24e20a4007a0c4e13db1d/page/page_5/ https://files.ssi.dk/covid19/omikron/statusrapport/rapport-omikronvarianten-09122021-ke43

Taxing something where the supply or demand is fixed is extremely efficient, and the extent to which purchases stay the same is exactly the extent to which supply or demand is inflexible. The economic inefficiency of a tax comes from the changes in behavior induced by the tax. The difference between a tariff and a sales tax is that a tariff induces you to buy domestic products.
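A toy deadweight-loss calculation (my own made-up numbers, using the standard Harberger-triangle approximation for linear supply and demand) illustrates the point that the inefficiency comes only from the induced change in quantity:

```python
# Harberger triangle: lost surplus is 0.5 * tax * (reduction in quantity traded),
# so when quantity doesn't change, the tax causes no deadweight loss.

def deadweight_loss(q_before: float, q_after: float, tax: float) -> float:
    """Approximate deadweight loss of a per-unit tax (linear supply/demand)."""
    return 0.5 * tax * (q_before - q_after)

# Flexible demand: a 2.0-per-unit tax cuts quantity from 100 to 80 units.
print(deadweight_loss(100, 80, 2.0))   # 20.0 units of lost surplus

# Perfectly inelastic demand: quantity stays at 100, so zero loss.
print(deadweight_loss(100, 100, 2.0))  # 0.0
```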