The proof of Theorem 1 is rather unclear: “high scoring” is ill-defined, and increasing the probability of some favorable outcome doesn’t imply that the action is good for u, since it can also increase the probability of some unfavorable outcome. Instead, you can easily construct by hand a u s.t. Qu(ha)≠Qu(h∅), using only that a≠∅ (just set u to equal 1 for any history with prefix ha and 0 for any history with prefix h∅).
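For concreteness, the construction can be written out (a sketch, writing Q_u(h) for the expected utility after history h):

```latex
u(h') =
\begin{cases}
1 & \text{if } ha \text{ is a prefix of } h' \\
0 & \text{otherwise,}
\end{cases}
\qquad\text{so}\qquad
Q_u(ha) = 1 \neq 0 = Q_u(h\varnothing),
```

since a ≠ ∅ guarantees that no history has both ha and h∅ as prefixes.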
The definition of stratified Pareto improvement doesn’t seem right to me. You are trying to solve the problem that there are too many Pareto optimal outcomes. So, you need to make the notion of Pareto improvement weaker: you want more changes to count as Pareto improvements, so that fewer outcomes count as Pareto optimal. However, the definition you gave is strictly stronger than the usual definition of Pareto improvement, not strictly weaker (because condition 3 has equality instead of inequality). What you seem to need is to drop condition 3 entirely.
The definition of almost stratified Pareto optimum also doesn’t make sense to me. What problem are you trying to solve? The closure of a set can only be larger than the set. Also, the closure of an empty set is empty. So, on the one hand, any stratified Pareto optimum is in particular an almost stratified Pareto optimum. On the other hand, if there exists an almost stratified Pareto optimum, then there exists a stratified Pareto optimum. So, you neither refine the definition of an optimum nor make existence easier.
If I’m not mistaken, it was August 16.
I was fixing bugs in the LaTeX and accidentally pressed “save draft” instead of “post”, after which I had to “post” again to make it reappear, and thereby bumped up the date. My apologies for the disturbance in the aether.
Another, very serious issue with LaTeX support: When you copy/paste LaTeX objects, the resulting objects are permanently linked. Editing the content of one of them changes the content of another, which is not visible when editing but becomes visible once you save the post. This one made me doubt my sanity for a moment.
I second the suggestion to state what is being proved before proving it.
One important note is that CDT spectacularly fails this property. Namely, consider a game of matching pennies against a powerful predictor. Since the environment takes actions as input, it’s possible to recompute what would have happened if a different action is plugged in. The CDT agent that keeps losing is going to learn to randomize between actions since it keeps seeing that the action it didn’t take would have done better. So it eventually gets to a state where it predicts the reward from “pick heads” and “pick tails” is 0.5 (because there’s a 50% chance it doesn’t pick heads/tails), but it predicts the reward from “I take an action” is 0, violating this assumption.
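This failure mode can be seen in a small simulation (an illustrative sketch, not from the original post; the learning rule, reward scheme, and all names are my own simplifications). A learner that evaluates each action by replaying the episode with the predictor's output held fixed converges to a counterfactual value near 0.5 for both actions, while its realized reward stays at 0:

```python
import random

# Hypothetical sketch of a CDT-style learner in matching pennies against a
# predictor that always predicts the learner's actual action (so the
# learner always loses). Counterfactual values are learned by plugging the
# other action into the same episode with the prediction held fixed.
random.seed(0)
q = {"heads": 0.0, "tails": 0.0}  # counterfactual value estimates
lr = 0.01                         # learning rate
total_reward = 0.0
n = 20000
for t in range(n):
    # the learner samples from its (here, fixed 50/50) mixed strategy
    action = random.choice(["heads", "tails"])
    prediction = action           # the predictor is perfect
    reward = 0.0                  # prediction matched, so the learner loses
    total_reward += reward
    # CDT-style counterfactual update: replay the episode with each action
    # plugged in, holding the predictor's output fixed
    for a in ("heads", "tails"):
        counterfactual_reward = 0.0 if a == prediction else 1.0
        q[a] += lr * (counterfactual_reward - q[a])

print(q["heads"], q["tails"], total_reward / n)
# both q values settle near 0.5, while realized average reward is exactly 0
```

This exhibits exactly the gap described above: the learner's per-action estimates say 0.5, but the value of "I take an action" is 0.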
Note, however, that ordinary Bayes-optimal RL works perfectly (assuming there are no traps in the prior or paranoia is otherwise avoided), since it would believe that taking a certain action causes the predictor to make the optimal response. This is similar to RL one-boxing in the repeated Newcomb’s problem.
I think that the separation between “AIs that care about the physical world” and “AIs that care only about the Platonic world” is not that clean in practice. The way I would expect an AGI optimizing a toy world to actually work is to run simulations of the toy world and look for simplified models of it that allow for feasible optimization. However, in this way it can stumble across a model that contains our physical world together with the toy world. This model is false in the Platonic world, but testing it using a simulation (i.e. trying to exploit some leak in the box) will actually show that it’s true (because the simulation is in fact running in the physical world rather than the Platonic world). Specifically, it seems to me that such a toy world is safe if and only if its description complexity is lower than the description complexity of the physical world + toy world.
Consider a system of linear equations over F2. Brute force search takes time exponential in the dimension. Gaussian elimination takes time polynomial in the dimension, and its description length is O(1). So your hypothesis clearly doesn’t work here. Now, it sounds more plausible if you assume your search problem is NP-hard. The question is whether this is a good model of intelligence. If this is a good model then it means that any intelligent agent will have enormous description complexity, and there is no better way of constructing one than doing brute force search. This probably implies a very long AGI timeline. However, I think it’s more likely that this is a bad model.
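To make the contrast concrete, here is a minimal sketch of the polynomial-time algorithm in question (Gaussian elimination over F2, with each row stored as a coefficient bitmask; the function name and encoding are my own):

```python
def solve_f2(rows, rhs, n):
    """Solve A x = b over F2 in O(n^2) word operations. rows[i] is a
    bitmask of the coefficients of equation i, rhs[i] its right-hand bit.
    Returns one solution as a bitmask, or None if the system is inconsistent."""
    rows, rhs = list(rows), list(rhs)
    pivot_cols = []
    r = 0  # number of pivot rows found so far
    for col in range(n):
        # find a row at or below r with a 1 in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue  # free variable
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rhs[r], rhs[pivot] = rhs[pivot], rhs[r]
        # eliminate this column from every other row (XOR = addition in F2)
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivot_cols.append(col)
        r += 1
    # a zero row with rhs 1 means the system has no solution
    if any(rows[i] == 0 and rhs[i] for i in range(r, len(rows))):
        return None
    # set free variables to 0; pivot variables read off from rhs
    x = 0
    for i, col in enumerate(pivot_cols):
        if rhs[i]:
            x |= 1 << col
    return x
```

Brute force over all 2^n assignments is exponential in the dimension, while this routine is polynomial and its description length is a small constant, which is the point of the counterexample.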
“Note that we can represent sequential decision problems in this framework (e.g. Sudoku), elements of A would then be vectors of individual actions.”
Unless the environment is deterministic, you want to consider policies rather than vectors of actions. On a related note, instead of considering a uniform distribution over actions, we might consider a uniform distribution over programs for a prefix-free universal Turing machine. This solves your repeated game paradox in the sense that the program that always picks 9 will have some finite probability and will do better than your agent for any T, so your agent’s score will be bounded.
The “rare event sampling” link is broken.
Does it mean you don’t want any more bug reports regarding the WYSIWYG LaTeX? Not criticism, just asking.
And another problem: if an inline LaTeX object is located at the end of a paragraph, there seems to be no easy way to place the cursor right after the object, unless the cursor is already there (neither the mouse nor the arrow keys help here). So I have to either delete the object and create it again, or write some text in the next paragraph and then use backspace to join the two paragraphs together. This second solution doesn’t work if there is also an equation LaTeX object after the end of the first paragraph, in which case you can’t use backspace, since it would delete the equation object.
That’s nice. Another reason it seems important is that some of the content of these essays will eventually make its way into actual papers, and it will be much easier if you can copy-paste big chunks with only mild reformatting afterwards, compared to having to copy-paste each LaTeX object by hand.
Thank you, that’s very helpful.
Another issue is that it seems impossible to find or find/replace strings inside the LaTeX.
Also, a “meta” issue: in IAFF, the source of an article was plain text in which LaTeX appeared as “$...$” or “$$...$$”. This allowed me to write essays in an external LaTeX editor and then copy them into IAFF with only a mild amount of effort. Here, the source seems to be inaccessible. This means that the native editor has to be good, because there are no alternatives. Maybe improving the native editor is indeed the best and easiest solution. But an alternative solution could be to somehow enable working with the source.
Another issue with LaTeX support: when I mark a block of text that contains LaTeX objects and copy-paste it, the LaTeX degrades into useless plain text. I can copy the contents of a particular LaTeX object by editing it, but sometimes it is very convenient to copy entire blocks.
Yes, I was only talking about alignmentforum, naturally.