Ok, I see what you mean about independence of irrelevant alternatives only being a real coherence condition when the probabilities are objective (or otherwise known to be equal because they come from the same source, even if there isn’t an objective way of saying what their common probability is).
But I disagree that this makes VNM only applicable to settings in which all sources of uncertainty have objectively correct probabilities. As I said in my previous comment, you only need there to exist some source of objective probabilities; you can then use preferences over lotteries involving objective probabilities, together with preferences over related lotteries involving other sources of uncertainty, to determine what probabilities the agent must assign to events arising from those other sources of uncertainty.
Re: the difference between VNM and Bayesian expected utility maximization, I take it from the word “Bayesian” that the way you’re supposed to choose between actions does involve first coming up with probabilities of each outcome resulting from each action, and from “expected utility maximization”, that these probabilities are to be used in exactly the way the VNM theorem says they should be. Since the VNM theorem does not make any assumptions about where the probabilities came from, these still sound essentially the same, except with Bayesian expected utility maximization being framed to emphasize that you have to get the probabilities somehow first.
I think you’re underestimating VNM here.
only two of those four are relevant to coherence. The main problem is that the axioms relevant to coherence (acyclicity and completeness) do not say anything at all about probability
It seems to me that the independence axiom is a coherence condition, unless I misunderstand what you mean by coherence?
correctly point out problems with VNM
I’m curious what problems you have in mind, since I don’t think VNM has problems that don’t apply to similar coherence theorems.
VNM utility stipulates that agents have preferences over “lotteries” with known, objective probabilities of each outcome. The probabilities are assumed to be objectively known from the start. The Bayesian coherence theorems do not assume probabilities from the start; they derive probabilities from the coherence criteria, and those probabilities are specific to the agent.
One can construct lotteries with probabilities that are pretty well understood (e.g. flipping coins that we have accumulated a lot of evidence are fair), and restrict attention to lotteries only involving uncertainty coming from such sources. One may then get probabilities for other, less well-understood sources of uncertainty by comparing preferences involving such uncertainty to preferences involving easy-to-quantify uncertainty (e.g. if A is preferred to B, and you’re indifferent between 60%A+40%B and “A if X, B if not-X”, then you assign probability 60% to X). Perhaps this is not quite as philosophically satisfying as deriving probabilities from scratch, but it doesn’t seem like a fatal flaw in VNM to me.
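To spell out the arithmetic in that parenthetical (a sketch, assuming the agent has a utility function u with u(A) > u(B)): indifference between 60%A+40%B and “A if X, B if not-X” means the two lotteries have equal expected utility,

0.6·u(A) + 0.4·u(B) = P(X)·u(A) + (1−P(X))·u(B),

which rearranges to (0.6 − P(X))·(u(A) − u(B)) = 0, and since u(A) ≠ u(B), this forces P(X) = 0.6.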
I do not expect agent-like systems in the wild to be pushed toward VNM expected utility maximization. I expect them to be pushed toward Bayesian expected utility maximization.
I understood those as being synonyms. What’s the difference?
I do, however, believe that the single-step cooperate-defect game which they use to come up with their factors is a very simple model for what will be a very complex system of interactions. For example, AI development will take place over time, and it is likely that the same companies will continue to interact with one another. Iterated games have very different dynamics, and I hope that future work will explore how this would affect their current recommendations, and whether it would yield new approaches to incentivizing cooperation.
It may be difficult for companies to get accurate information about how careful their competitors are being about AI safety. An iterated game in which players never learn what the other players did on previous rounds is the same as a one-shot game. This points to a sixth factor that increases chance of cooperation on safety: high transparency, so that companies may verify their competitors’ cooperation on safety. This is closely related to high trust.
I object to the framing of the bomb scenario on the grounds that low probabilities of high stakes are a source of cognitive bias that trips people up for reasons having nothing to do with FDT. Consider the following decision problem: “There is a button. If you press the button, you will be given $100. Also, pressing the button has a very small (one in a trillion trillion) chance of causing you to burn to death.” Most people would not touch that button. Using the same payoffs and probabilities in a scenario to challenge FDT thus exploits cognitive bias to make FDT look bad. A better scenario would be to replace the bomb with something that will fine you $1000 (and, if you want, also increase the chance of error).
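To make the bias concrete, here’s a minimal sketch of the expected-value arithmetic; the dollar-equivalent cost assigned to death is purely my own illustrative assumption, not part of the scenario:

```python
# Expected value of pressing the button, under assumed (illustrative) numbers.
p_death = 1e-24      # "one in a trillion trillion"
gain = 100           # dollars for pressing the button
death_cost = 1e10    # assumed dollar-equivalent disvalue of dying (illustration only)

ev_press = (1 - p_death) * gain - p_death * death_cost
ev_decline = 0.0

print(f"EV of pressing:  ${ev_press:.6f}")   # ~ $100
print(f"EV of declining: ${ev_decline:.2f}")
# Even with an enormous cost assigned to death, the expected downside
# (1e-24 * 1e10 = 1e-14 dollars) is negligible next to the $100 gain,
# yet most people still refuse the button -- which is the bias being pointed at.
```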
But then, it seems to me, that FDT has lost much of its initial motivation: the case for one-boxing in Newcomb’s problem didn’t seem to stem from whether the Predictor was running a simulation of me, or just using some other way to predict what I’d do.
I think the crucial difference here is how easily you can cause the predictor to be wrong. In the case where the predictor simulates you, if you two-box, then the predictor expects you to two-box. In the case where the predictor uses your nationality to predict your behavior (say Scots usually one-box, and you’re Scottish), if you two-box, then the predictor will still expect you to one-box, because you’re Scottish.
But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S...
I didn’t think that was supposed to matter at all? I haven’t actually read the FDT paper, and have mostly just been operating under the assumption that FDT is basically the same as UDT, but UDT didn’t build in any dependency on external agents, and I hadn’t heard about any such dependency being introduced in FDT; it would surprise me if it did.
I don’t know if I’m a simulation or a real person.
A possible response to this argument is that the predictor may be able to accurately predict the agent without explicitly simulating them. A possible counter-response to this is to posit that any sufficiently accurate model of a conscious agent is necessarily conscious itself, whether the model takes the form of an explicit simulation or not.
I think the counterfactuals used by the agent are the correct counterfactuals for someone else to use while reasoning about the agent from the outside, but not the correct counterfactuals for the agent to use while deciding what to do. After all, knowing the agent’s source code, if you see it start to cross the bridge, it is correct to infer that its reasoning is inconsistent, and you should expect to see the troll blow up the bridge. But while deciding what to do, the agent should be able to reason about the purely causal effects of its counterfactual behavior, screening out other logical implications.
Also, counterfactuals which predict that the bridge blows up seem to be saying that the agent can control whether PA is consistent or inconsistent.
Disagree that that’s what’s happening. The link between the consistency of the reasoning system and the behavior of the agent exists because the consistency of the reasoning system controls the agent’s behavior, rather than the other way around. Since the agent is selecting actions based on their consequences, it does make sense to speak of the agent choosing actions to some extent, but speaking of the logical implications of the agent’s actions for the consistency of formal systems as “controlling” the consistency of those systems seems like an inappropriate attribution of agency to me.
I suppose that’s why we’re not minimizing the determinant, but rather the frobenius norm.
Yes, although another reason is that the determinant is only defined if the input and output spaces have the same dimension, which they typically don’t.
First, a vector can be seen as a list of numbers, and a matrix can be seen as an ordered list of vectors. An ordered list of matrices is… a tensor of order 3. Well not exactly. Apparently some people are actually disappointed with the term tensor because a tensor means something very specific in mathematics already and isn’t just an ordered list of matrices. But whatever, that’s the term we’re using for this blog post at least.
It’s true that tensors are something more specific than multidimensional arrays of numbers, but Jacobians of functions between tensor spaces (that being what you’re using the multidimensional arrays for here) are, in fact, tensors.
What this means for the Jacobian is that the determinant tells us how much space is being squished or expanded in the neighborhood around a point. If the output space is being expanded a lot at some input point, then this means that the neural network is a bit unstable in that region, since minor alterations in the input could cause huge distortions in the output. By contrast, if the determinant is small, then some small change to the input will hardly make a difference to the output.
This isn’t quite true; the determinant being small is consistent with small changes in input making arbitrarily large changes in output, just so long as small changes in input in a different direction make sufficiently small changes in output.
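A minimal numerical illustration of the point (the matrix is a made-up example, not a real network’s Jacobian):

```python
import numpy as np

# A Jacobian that stretches one input direction by 1e6 and squashes another by 1e-12.
J = np.diag([1e6, 1e-12])

print(np.linalg.det(J))        # 1e-6: a "small" determinant
# Yet a unit-length input change along the first axis still produces a huge output change:
dx = np.array([1.0, 0.0])
print(np.linalg.norm(J @ dx))  # 1e6

# The frobenius norm, unlike the determinant, does flag this instability:
print(np.linalg.norm(J, 'fro'))  # ~1e6
```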
The frobenius norm is nothing complicated, and is really just a way of describing that we square all of the elements in the matrix, take the sum, and then take the square root of this sum.
An alternative characterization of the frobenius norm better highlights its connection to the motivation for regularizing the Jacobian frobenius norm, namely limiting the extent to which small changes in input can cause large changes in output: the frobenius norm of a matrix J is the square root of the sum of |J(e_i)|² over an orthonormal basis {e_1, …, e_n} of the input space, which works out to √n times the root-mean-square of |J(x)| over all unit vectors x (where n is the input dimension).
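A quick numerical check of both forms of that characterization (a sketch with a random matrix; nothing here is specific to neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3                      # input and output dimensions
J = rng.normal(size=(m, n))

print(np.linalg.norm(J, 'fro'))  # the frobenius norm itself

# Exact form: root-sum-square of |J e_i| over an orthonormal basis of the input space.
basis = np.eye(n)
print(np.sqrt(np.sum(np.linalg.norm(J @ basis, axis=0) ** 2)))

# Monte Carlo form: sqrt(n) times the root-mean-square of |J x| over random unit vectors x.
xs = rng.normal(size=(100_000, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
rms = np.sqrt(np.mean(np.linalg.norm(xs @ J.T, axis=1) ** 2))
print(np.sqrt(n) * rms)          # agrees with the above up to sampling error
```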
“Controlling which Everett branch you end up in” is the wrong way to think about decisions, even if many-worlds is true. Brains don’t appear to rely much on quantum randomness, so if you make a certain decision, that probably means that the overwhelming majority of identical copies of you make the same decision. You aren’t controlling which copy you are; you’re controlling what all of the copies do. And even if quantum randomness does end up mattering in decisions, so that a non-trivial proportion of copies of you make different decisions from each other, you would still presumably want a high proportion of them to make good decisions; you can do your part to bring that about by making good decisions yourself.
Consider reading a real physicist’s take on the issue
This seems phrased to suggest that her view is “the real physicist view” on the multiverse. You could also read what Max Tegmark or David Deutsch, for instance, have to say about multiverse hypotheses and get a “real physicist’s” view from them.
Also, she doesn’t actually say much in that blog post. She points out that when she says that multiverse hypotheses are unscientific, she doesn’t mean that they’re false, so this doesn’t seem especially useful to someone who wants to know whether there actually is a multiverse, or is interested in the consequences thereof. She says “there is no reason to think we live in such multiverses to begin with”, but proponents of multiverse hypotheses have given reasons to support their views, which she doesn’t address.
#1 (at the end) sounds like complexity theory.
Some of what von Neumann says makes it sound like he’s interested in a mathematical foundation for analog computing, which I think has been done by now.
On several occasions, the authors emphasize how the intuitive nature of “effective computability” renders futile any attempt to formalize the thesis. However, I’m rather interested in formalizing intuitive concepts and therefore wondered why this hasn’t been attempted.
Formalizing the intuitive notion of effective computability was exactly what Turing was trying to do when he introduced Turing machines, and Turing’s thesis claims that his attempt was successful. If you come up with a new formalization of effective computability and prove it equivalent to Turing computability, then in order to use this as a proof of Turing’s thesis, you would need to argue that your new formalization is correct. But such an argument would inevitably be informal, since it links a formal concept to an informal concept, and there already have been informal arguments for Turing’s thesis, so I don’t think there is anything really fundamental to be gained from this.
Consider the halting set; … is not enumerable / computable. …Here, we should be careful with how we interpret “information”. After all, coNP-complete problems are trivially Cook reducible to their NP-complete counterparts (e.g., query the oracle and then negate the output), but many believe that there isn’t a corresponding Karp reduction (where we do a polynomial amount of computation before querying the oracle and returning its answer). Since we aren’t considering complexity but instead whether it’s enumerable at all, complementation is fine.
You’re using the word “enumerable” in a nonstandard way here, which might indicate that you’ve missed something (and if not, then perhaps at least this will be useful for someone else reading this). “Enumerable” is not usually used as a synonym for computable. A set is computable if there is a program that determines whether or not its input is in the set. But a set is enumerable if there is a program that halts if its input is in the set, and does not halt otherwise. Every computable set is enumerable (since you can just use the output of the computation to decide whether or not to halt). But the halting set is an example of a set that is enumerable but not computable (it is enumerable because you can just run the program coded by your input, and halt if/when it halts). Enumerable sets are not closed under complementation; in fact, an enumerable set whose complement is enumerable is computable (because you can run the programs for the set and its complement in parallel on the same input; eventually one of them will halt, which will tell you whether or not the input is in the set).
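Here’s a sketch of that parallel-running construction, modelling each semi-decider abstractly as a step-bounded predicate (the helper names and interface are hypothetical, just for illustration):

```python
def decide(x, halts_in_S, halts_in_complement):
    """Decide membership in S, given semi-deciders for S and for its complement.

    halts_in_S(x, steps) reports whether the semi-decider for S halts on x within
    `steps` steps (likewise for the complement).  Exactly one of the two eventually
    halts on any x, so this loop always terminates.
    """
    steps = 1
    while True:
        if halts_in_S(x, steps):
            return True
        if halts_in_complement(x, steps):
            return False
        steps += 1

# Toy usage: S = even numbers, with artificial semi-deciders that take x steps to halt.
def halts_even(x, steps): return x % 2 == 0 and steps >= x
def halts_odd(x, steps):  return x % 2 == 1 and steps >= x

print(decide(10, halts_even, halts_odd))  # True
print(decide(7, halts_even, halts_odd))   # False
```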
The distinction between Cook and Karp reductions remains meaningful when “polynomial-time” is replaced by “Turing computable” in the definitions. Any set that is Turing-Karp reducible to an enumerable set is also enumerable, but an enumerable set is Turing-Cook reducible to its complement, which need not be enumerable.
The reason “enumerable” is used for this concept is that a set is enumerable iff there is a program computing a sequence that enumerates every element of the set. Given a program that halts on exactly the elements of a given set, you can construct an enumeration of the set by running your program on every input in parallel, and adding an element to the end of your sequence whenever the program halts on that input. Conversely, given an enumeration of a set, you can construct a program that halts on elements of the set by going through the sequence and halting whenever you find your input.
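A sketch of that dovetailing construction, again with the semi-decider modelled as a hypothetical step-bounded predicate halts_within(x, steps):

```python
from itertools import count

def enumerate_set(halts_within):
    """Yield every natural number x on which the semi-decider halts.

    Dovetailing: at stage n, run the semi-decider for n steps on each input x <= n,
    so every element of the set eventually gets emitted (each exactly once).
    """
    emitted = set()
    for n in count(1):
        for x in range(n + 1):
            if x not in emitted and halts_within(x, n):
                emitted.add(x)
                yield x

# Toy usage: the set of perfect squares, whose "semi-decider" takes x steps to halt.
def halts_within(x, steps): return int(x ** 0.5) ** 2 == x and steps >= x

gen = enumerate_set(halts_within)
print([next(gen) for _ in range(5)])  # [0, 1, 4, 9, 16]
```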
I don’t follow the analogy to 1/x being a partial function that you’re getting at.
Maybe a better way to explain what I’m getting at is that it’s really the same issue that I pointed out for the two-envelopes problem, where you know the amount of money in each envelope is finite, but the uniform distribution up to an infinite surreal would suggest that the probability that the amount of money is finite is infinitesimal. Suppose you say that the size of the ray [0,∞) is an infinite surreal number n. The size of the portion of this ray that is distance at least r from 0 is n−r when r is a positive real, so presumably you would also want this to be so for surreal r. But using, say, r:=√n, every point in [0,∞) is within distance √n of 0, but this rule would say that the measure of the portion of the ray that is farther than √n from 0 is n−√n; that is, almost all of the measure of [0,∞) is concentrated on the empty set.
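To make the clash explicit: the rule “the measure of [r,∞) is n−r” for real r ≥ 0, extended to surreal r, would give the set {x ∈ [0,∞) : x ≥ √n} measure n−√n. But √n is itself infinite (its square is n), so it exceeds every real number, which means that set is empty while n−√n is still an infinite surreal.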
The latter. It doesn’t even make sense to speak of maximizing the expectation of an unbounded utility function, because unbounded functions don’t even have expectations with respect to all probability distributions (e.g. if utility is unbounded above, then a St. Petersburg-style gamble that achieves utility at least 2^k with probability 2^−k, for each k, has no finite expected utility).
There is a way out of this that you could take, which is to only insist that the utility function has to have an expectation with respect to probability distributions in some restricted class, if you know your options are all going to be from that restricted class. I don’t find this very satisfying, but it works. And it offers its own solution to Pascal’s mugging, by insisting that any outcome whose utility is on the scale of 3^^^3 has prior probability on the scale of 1/(3^^^3) or lower.
It’s a bad bullet to bite. Its symmetries are essential to what makes Euclidean space interesting.
And here’s another one: are you not bothered by the lack of countable additivity? Suppose you say that the volume of Euclidean space is some surreal number n. Euclidean space is the union of an increasing sequence of balls. The volumes of these balls are all finite, in particular, less than n/2, so how can you justify saying that their union has volume greater than n/2?
Why? Plain sequences are a perfectly natural object of study. I’ll echo gjm’s criticism that you seem to be trying to “resolve” paradoxes by changing the definitions of the words people use so that they refer to unnatural concepts that have been gerrymandered to fit your solution, while refusing to talk about the natural concepts that people actually care about.
I don’t think your proposal is a good one for indexed sequences either. It is pretty weird that shifting the indices of your sequence over by 1 could change the size of the sequence.
What about rotations, and the fact that we’re talking about destroying a bunch of symmetry of the plane?
There are measurable sets whose volumes will not be preserved if you try to measure them with surreal numbers. For example, consider [0,∞)⊆R. Say its measure is some infinite surreal number n. The volume-preserving left-shift operation x↦x−1 sends [0,∞) to [−1,∞), which has measure 1+n, since [−1,0) has measure 1. You can do essentially the same thing in higher dimensions, and the shift operation in two dimensions ((x,y)↦(x−1,y)) can be expressed as the composition of two rotations, so rotations can’t be volume-preserving either. And since different rotations will have to fail to preserve volumes in different ways, this will break symmetries of the plane.
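For concreteness, here is one such decomposition (a routine check, supplied here for illustration): rotating by 180° about the origin sends (x,y) to (−x,−y), and rotating by 180° about (−1/2, 0) sends any point p to (−1,0)−p; composing the two sends (x,y) first to (−x,−y) and then to (x−1, y), which is exactly the shift.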
I wouldn’t say that volume-preserving transformations fail to preserve volume on non-measurable sets, just that non-measurable sets don’t even have measures that could be preserved or not preserved. Failing to preserve measures of sets that you have assigned measures to is entirely different. Non-measurable sets also don’t arise in mathematical practice; half-spaces do. I’m also skeptical of the existence of non-measurable sets, but the non-existence of non-measurable sets is a far bolder claim than anything else I’ve said here.