It is neither surprising nor interesting that when you make up a problem superficially similar to Newcomb’s paradox but with totally different conditions, you end up with different outcomes.
My strong expectation is that in Life, there is no configuration that you can put in the starting area that is robust against randomly initializing the larger area.
There are very likely other cellular automata which do support arbitrary computation, but which are much less fragile under the evolution of randomly initialized regions nearby.
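The fragility claim is easy to explore directly. Below is a minimal Life simulator in Python (a sketch; the glider pattern is standard, but the soup placement and density are my own arbitrary choices). In empty space the glider reliably reproduces itself shifted one cell diagonally every 4 generations; seeding a random soup next to it will typically disrupt it within a few dozen steps.

```python
import random
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded grid (set of (row, col))."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

# In empty space the glider is robust: after 4 steps it reappears
# translated one cell down and one cell right.
g = glider
for _ in range(4):
    g = step(g)
assert g == {(r + 1, c + 1) for (r, c) in glider}

# Seed a random "soup" in its path; the pattern generally does not survive
# the interaction. (No assertion here: the exact outcome depends on the seed.)
random.seed(0)
soup = {(r, c) for r in range(4, 9) for c in range(4, 9) if random.random() < 0.5}
s = glider | soup
for _ in range(20):
    s = step(s)
```

The same harness could be pointed at the stronger claim in the comment: initialize the whole surrounding region at random and see whether any prepared configuration persists.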
I see no reason why the semantic uniformity property should hold for mathematics. Sentences of similar form certainly need not share the same semantics, even when they have nothing to do with mathematics. English has polysemous words, and the meaning of “exists” in reference to mathematical concepts is different from the meaning of “exists” in reference to physical entities.
If this were for some reason a big problem, then an obvious solution would be to use different words or sentence structures for mathematics. To some degree this is already the usual convention: mathematics has its own notations, which are translated loosely into English and other natural languages. There are many examples of students taking these vaguer translations overly literally, to their own detriment.
A “counterfactual” seems to be just any output of a model given by inputs that were not observed. That is, a counterfactual is conceptually almost identical to a prediction. Even in deterministic universes, being able to make predictions based on incomplete information is likely useful to agents, and ability to handle counterfactuals is basically free if you have anything resembling a predictive model of the world.
If we have a model that Omega’s behaviour requires that anyone choosing box B must receive 10 utility, then our counterfactuals (model outputs) should reflect that. We can of course entertain the idea that Omega doesn’t behave according to such a model, because we have more general models that we can specialize. We must have, or we couldn’t make any sense of text such as “let’s suppose Omega is programmed in such a way...”. That sentence in itself establishes a counterfactual (with a sub-model!), since I have no knowledge in reality of anyone named Omega nor of how they are programmed.
We might also have (for some reason) near-certain knowledge that Amy can’t choose box B, but that wasn’t stated as part of the initial scenario. Finding out that Amy in fact chose box A doesn’t utterly erase the ability to employ a model in which Amy chooses box B, and so asking “what would have happened if Amy chose box B” is still a question with a reasonable answer using our knowledge about Omega. A less satisfactory counterfactual question might be “what would happen if Amy chose box A and didn’t receive 5 utility”.
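To make the “counterfactual = model evaluated at an unobserved input” point concrete, here is a toy sketch. The payoff numbers are just the ones implied by the scenario as described (box B yields 10 utility, box A yields 5); nothing here is canonical.

```python
def omega_outcome(choice):
    """Hypothetical model of Omega's behaviour from the scenario:
    anyone choosing box B receives 10 utility; box A yields 5."""
    return {'A': 5, 'B': 10}[choice]

# Observation: Amy chose box A, and the model's prediction matches.
assert omega_outcome('A') == 5

# The counterfactual "what would have happened if Amy chose box B?"
# is just the same model evaluated at an input that was never observed.
assert omega_outcome('B') == 10
```

The “less satisfactory” question corresponds to an input the model simply has no consistent output for: there is no argument to `omega_outcome` that yields “chose A and didn’t receive 5”.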
I don’t see any implications for determinism here, or even for complexity.
It’s just a statement that these abstract models (and many others) that we commonly use are not directly extensible into the finer-grained model.
One thing to note is that in both cases, there are alternative abstract models with variables that are fully specified by the finer-grained reality model. They’re just less convenient to use.
There is a serious underlying causal model difference here that cannot be addressed in a purely statistical manner.
The video proposes a model in which some fraction of the population have baseline allergies (yielding a 1% baseline allergic reaction rate over the course of the study), and independently from that the treatment causes allergic reactions in some fraction of people (just over 1% during the same period). If this model is universally correct, then the argument is reasonable.
Do we know that the side effects are independent in this way? It seems to me that we cannot assume this model of independence for every possible combination of treatment and side effect. If there is any positive correlation, then the assumption of independence will yield an extrapolated risk that is too low.
Putting this in terms of causal models, the independence model looks like: Various uncontrolled and unmeasured factors E in the environment sometimes cause some observed side effect S, as observed in the control group. Treatment T also sometimes causes S, regardless of E.
This seems too simplistic. It seems much more likely that E interacts with unknown internal factors I that vary per-person to sometimes cause side effect S in the control group. Treatment T also interacts with I to sometimes produce S.
If you already know that your patient is in a group observed to be at extra risk of S compared with the study population, it is reasonable to infer that this group has a different distribution of I putting them at greater risk of S than the studied group, even if you don’t know how much or what the gears-level mechanism is. Since T also interacts with I, it is reasonable as a general principle to expect a greater chance of T causing the side effect in your patient than independence would indicate.
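A tiny numerical sketch of why this matters (all numbers are hypothetical, chosen to echo the ~1% rates in the video): the two causal models agree exactly on the study population, yet diverge for a patient in a higher-susceptibility group.

```python
# Per-person susceptibility is modelled as a multiplier i (i = 1 is the
# study-population average). Both base rates are made-up illustrative numbers.
base_env = 0.01    # chance of side effect S from environment E alone, at i = 1
base_treat = 0.01  # extra chance the treatment T adds, at i = 1

def risk_independent(i):
    # The video's model: T causes S at a fixed rate, regardless of the person.
    return i * base_env + base_treat

def risk_interacting(i):
    # Alternative model: T's effect also scales with internal factors I.
    return i * base_env + i * base_treat

# Within the study population the models are observationally identical:
assert risk_independent(1) == risk_interacting(1)

# For a patient in a group at 3x baseline susceptibility they diverge,
# and independence underestimates the risk:
assert risk_interacting(3) > risk_independent(3)  # 0.06 vs 0.04
```

No amount of data from the original study population distinguishes the two functions, which is the sense in which the difference cannot be addressed purely statistically.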
So the question “shall we count the living or the dead” seems misplaced. The real question is to what degree side effects are expected to have common causes or susceptibilities that depend upon persistent factors.
Allergic reactions in particular are known to be not independent, so an example video based on some other side effect may have been better. I can’t really think of one though, which does sort of undermine the point of the article.
Would you prefer “for nearly all purposes, any bounds there might be are irrelevant”?
Despite the form, statement (b) is not actually a logical conjunction. It is a statement about the collective of both parents.
This becomes clearer if we strengthen the statement slightly to “Alvin’s mother and father are responsible for all of his genome”. It is now much more obvious that this is not a logical conjunction. If it were, it would mean “Alvin’s mother is responsible for all of his genome and Alvin’s father is responsible for all of his genome”, both of which are false.
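The collective-vs-distributive distinction can be sketched with sets (the 10-locus “genome” and the clean split between parents are made up purely for illustration):

```python
# Toy genome of 10 loci, with an invented split between the parents.
genome = set(range(10))
from_mother = {0, 1, 2, 3, 4}
from_father = {5, 6, 7, 8, 9}

# Collective reading: together they account for the whole genome. True.
assert from_mother | from_father == genome

# Distributive (true conjunction) reading: each parent alone accounts
# for all of it. False for both conjuncts.
assert from_mother != genome
assert from_father != genome
```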
First I need to ask, what does the phrase “moral world” mean, or more specifically mean to you? Is it much the same concept as a “just world”? I know what is intended by someone who says that an action is moral (though there are major and vigorous disagreements about which actions are moral). I know what is meant by a person (or deity) being moral.
I do not know what is meant by a world being moral. Does it just mean “an absolute standard for morality exists”? I’m not sure how that could conceivably be a property of a world though, regardless of whether that world has a god or not.
Utility functions are especially problematic for modelling the behaviour of agents with bounded rationality, or for which reasoning itself has costs. These include every physically realizable agent.
For modelling human behaviour, even in the idealized sense of what we would like human behaviour to achieve, the problems are worse. We can hope that there is some utility function consistent with the behaviour we’re modelling and just ignore the cases where there isn’t, but that doesn’t seem satisfactory either.
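A minimal sketch of a “case where there isn’t” any consistent utility function: cyclic observed choices. The options and the cycle below are hypothetical; the exhaustive search confirms that no assignment of utilities rationalizes them.

```python
from itertools import permutations

options = ['A', 'B', 'C']
# Hypothetical observed choices forming a cycle: A over B, B over C, C over A.
observed = [('A', 'B'), ('B', 'C'), ('C', 'A')]

# Try every possible strict utility ranking of the three options.
consistent = any(
    all(u[x] > u[y] for x, y in observed)
    for ranks in permutations(range(len(options)))
    for u in [dict(zip(options, ranks))]
)

assert not consistent  # no utility function is consistent with the cycle
```

Real human choice data need not be this tidy a cycle, but intransitivities of this general shape are exactly what the “just ignore those cases” move sweeps aside.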
The way I use “extensibility” here is between two different models of reality, and just means that one can be obtained from the other merely by adding details to it without removing any parts of it. In this case I’m considering two models, both with abstractions such as the idea that “fish” exist as distinct parts of the universe, have definite “weights” that can be “measured”, and so on.
One model is more abstract: there is a “population weight distribution” from which fish weights at some particular time are randomly drawn. This distribution has some free parameters, affected by the history of the tank.
The other model is finer-grained: there are a bunch of individual fish, each with its own weight, presumably determined by its own individual life circumstances. The concept of a “population weight distribution” does not exist in this model at all. There is no “abstract” population apart from the actual population of 100 fish in the tank.
So yes, in that sense the “population mean” variable does not directly represent anything in the physical world (or at least in our finer-grained model of it). That does not make it useless. Its presence in the more abstract model allows us to make predictions about other tanks that we have not yet observed, which the finer-grained model cannot.
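A small sketch of the two models side by side (the parameters mu and sigma, the normal distribution, and the tank size are all invented for illustration): the parameter mu has no counterpart among the 100 individual weights, yet it is exactly what licenses a prediction about a second tank.

```python
import random
import statistics

random.seed(1)

# Abstract model: a "population weight distribution" with free parameters,
# here standing in for parameters shaped by the tank's history.
mu, sigma = 50.0, 5.0

# Finer-grained model: just 100 individual fish, each with its own weight.
tank = [random.gauss(mu, sigma) for _ in range(100)]

# The sample mean is a fact about these particular fish...
sample_mean = statistics.mean(tank)
assert abs(sample_mean - mu) < sigma  # close to, but not identical with, mu

# ...whereas mu exists only in the abstract model. It is what lets us say
# something about a second, not-yet-observed tank with a similar history:
predicted_mean_of_next_tank = mu
```

In the finer-grained model there is nothing to carry over to the next tank except the 100 particular numbers, which is precisely its limitation.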
The main difference between mathematics and most other works of fiction is that mathematics is based on what you can derive when you follow certain sets of rules. The sets of rules are in principle just as arbitrary as any artistic creation, but some are very much more interesting in their own right or useful in the real world than others.
As I see it, the sense in which 2+2 “really” equals 4 is that we agree on a foundational set of definitions and rules taught at a very young age in today’s cultures, that following those rules leads to that result, and that such rules have been incredibly useful for thousands of years in nearly every known culture.
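As a minimal sketch of “following certain sets of rules”, here is 2 + 2 = 4 derived from nothing but a successor encoding of numerals and the usual recursive definition of addition. The encoding itself is an arbitrary choice, which is rather the point:

```python
# Numerals as iterated successor: Z, S(Z), S(S(Z)), ...
Z = 'Z'
def S(n):
    return ('S', n)

def add(a, b):
    # The two defining rules:  a + Z = a;  a + S(b) = S(a + b)
    return a if b == Z else S(add(a, b[1]))

two = S(S(Z))
four = S(S(S(S(Z))))

# Once the rules are fixed, the result is forced.
assert add(two, two) == four
```

A different (equally arbitrary) rule set would force different results; this one happens to be the extraordinarily useful one.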
There are “mathematical truths” that don’t share this history and aren’t talked about in the same way.
To me it seems more likely that this person is misreporting their motive than that they really oppose this allocation of a patch of grass on biodiversity grounds. I would expect grounds like “I want to use it myself”, or the slightly more general “it should be available for a wider group”, to be very much more common. For example, if I had to rank the likelihood of motives after hearing that someone objects, but before hearing their reasons, I’d end up with more weight on “playing social games” than on “earnestly believes this”.
On the other hand, it would not surprise me very much if at least one person somewhere truly held this position. It’s just that my weight for any particular person would be very low.
It seems to me that the positions labelled “definite” vs “indefinite” here are really closer to “self-determination” vs “fatalism”. It may be true that at the scale of societies these are highly predictive of each other, but I’m not sure that they are really interchangeable.
In particular the coin toss society thought experiment wasn’t convincing to me at all.