The usual question to ask is “what has worked before, why, and what salient features can we reuse?”
A few typos: It’s Bekenstein; exp[M×10^−69] should be exp[-M×10^69]
I second that request.
Can you separate the descriptive (e.g. “the behavioral patterns have changed in these ways between 1930 and now: …”), the moral judgment (e.g. “declining”, “poorer”), and the prescriptive (e.g. “we should adapt some of the old patterns to the modern world”)?
I still don’t understand the whole deal about counterfactuals, exemplified as “If Oswald had not shot Kennedy, then someone else would have”. Maybe MIRI means something else by the counterfactuals?
If it’s the counterfactual conditionals, then the approach is pretty simple, as discussed with jessicata elsewhere: there is the macrostate of the world (i.e. the state as known to a specific observer, which is compatible with many possible microstates). One of these microstates led to the observed macroscopic event; other possible microstates would have led to the same or different macrostates, e.g. Oswald shoots Kennedy, Oswald’s gun jams, someone else shoots Kennedy, and so on. The problem is constructing the set of microstates and the probability distribution over them that together lead to the pre-shooting macrostate. Once you know those, you can predict the odds of each post-shooting macrostate. When you think about the problem this way, there are no counterfactuals, only state evolution, and it applies equally to the past, the present, and the future.
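A minimal sketch of this state-evolution view, where all microstate names and probabilities are invented purely for illustration:

```python
# State-evolution view of "counterfactuals": enumerate microstates
# compatible with the pre-shooting macrostate, assign a probability
# distribution over them (all numbers here are hypothetical), then
# evolve each microstate forward to its resulting macrostate.
from collections import defaultdict

# microstate -> best-guess probability; values must sum to 1
pre_shooting_microstates = {
    "oswald_fires_cleanly": 0.90,
    "oswalds_gun_jams": 0.05,
    "second_shooter_fires": 0.05,
}

def evolve(microstate):
    """Deterministic evolution: microstate -> post-shooting macrostate."""
    if microstate in ("oswald_fires_cleanly", "second_shooter_fires"):
        return "kennedy_shot"
    return "kennedy_unharmed"

# Aggregate microstate probabilities into odds over macrostates.
macrostate_odds = defaultdict(float)
for micro, p in pre_shooting_microstates.items():
    macrostate_odds[evolve(micro)] += p
```

In this framing the “counterfactual” question is just conditioning: restrict to the microstates in which Oswald does not fire, renormalize, and evolve those forward.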
I posted about it before, but just to reiterate my question: if you can “simply” count the possible (micro-)states and their probabilities, then what is there beyond this simple counting?
Just to give an example: in Newcomb’s problem, the pre-decision microstates of the brain of the “agent”, while known to the Predictor, are not known to the agent. Some of these microstates lead to the macrostate corresponding to two-boxing, and some lead to the macrostate corresponding to one-boxing. Knowing what these microstates might be, and assigning our best-guess probabilities to them, lets us predict what action the agent will take, if not as perfectly as the Predictor does, then as well as we ever can. What do UDT or FDT say beyond that, or contrary to that?
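The same counting can be sketched for Newcomb’s problem; again, the microstate names and probabilities below are purely hypothetical:

```python
# Newcomb's problem in the same framing: the agent's brain is in one
# of many microstates, each of which determines the chosen action.
# Microstate names and probabilities are invented for illustration.

# microstate -> (best-guess probability, action it leads to)
brain_microstates = {
    "disposition_A": (0.4, "one_box"),
    "disposition_B": (0.3, "one_box"),
    "disposition_C": (0.3, "two_box"),
}

def predict_action(microstates):
    """Aggregate microstate probabilities into odds for each action."""
    odds = {}
    for p, action in microstates.values():
        odds[action] = odds.get(action, 0.0) + p
    return odds

prediction = predict_action(brain_microstates)
```

The Predictor, knowing the actual microstate, gets a degenerate distribution; an outside observer with only the macrostate gets the mixed one above.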
Funny how this downvoting is basically shooting the messenger. Evan is not the one who called Scott a pseudo-intellectual. If the downvotes are about the discomfort with the idea of overtly critiquing SSC, then it is not much better.
Personally, I think Scott is a genius and his posts constantly give me new insights into the world, he is an embodiment of steelmanning uncomfortable views, and is as fair and balanced as any fox might aspire to be. That said, I hope to attend the meetup, mainly to see what arguments some people put forward that make SSC “not worth reading”, which is an extremely high bar to set. I expect their arguments to be more emotional than logical or rational, and it will be fun to see their rationalizations.
Yes, you could derive the horizon stuff from special relativity, but to construct an asymptotically de Sitter spacetime you need general relativity. Anyway, that wasn’t the original issue. “No collapse at intermediate scales is a good hypothesis and maybe wrong for this specific reason” is one possibility, the likelihood of which is currently hard to evaluate, as it extrapolates quantum mechanics far beyond the domain where it has been tested (Zeilinger’s buckyball double-slit experiments). The nature of the apparent collapse is a huge open problem, with decoherence and Zurek’s quantum Darwinism giving some hints at why certain states survive and others don’t, and pretending that MWI somehow dissolves the issue, the way Eliezer tells the tale, is a bit of a delusion. Anyway, MWI does not make any predictions, since it simply tells you that the feeling of being in a single world is an illusion, without going into the details of how to resolve the Wigner’s friend and similar paradoxes. See Scott Aaronson’s lecture 12 on the topic for more discussion.
The argument that no collapse happens at intermediate scales between very small and the entire universe is a symmetry-based argument, just as the argument that things beyond the cosmological horizon still exist is a symmetry-based argument.
Yes, I agree. But to discover and effectively apply symmetry one generally has to have a workable model first. For example, the invariance of the speed of light followed from the Maxwell equations, was confirmed experimentally, and was incorporated into the math of the Lorentz transformations, yet without a good theory those appeared ugly, not symmetric. It took a new theory to reveal the hidden symmetry, and to eventually write the Maxwell equations in half a line, A = J and div A = 0, down from the original 20. Same with the cosmological horizon: it does not appear symmetric, and one needs to understand some amount of general relativity to see the symmetry. Or believe those who say that there is one. The “no collapse at the intermediate scales” is a good hypothesis, but quite possibly wrong, because gravity is likely to cause decoherence in some way, as Penrose pointed out.
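Spelled out, the half-line form alluded to is the covariant potential formulation in the Lorenz gauge (units with constants set to 1):

```latex
% Maxwell's equations in the Lorenz gauge, written compactly:
% the wave equation for the four-potential plus the gauge condition.
\Box A^{\mu} = J^{\mu}, \qquad \partial_{\mu} A^{\mu} = 0
```

Here $A^{\mu}$ is the four-potential and $J^{\mu}$ the four-current; the manifest Lorentz symmetry is what the original twenty component equations obscured.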
Haven’t read the book, but her blog is one of half a dozen sites I follow regularly. It talks about how pushing for subjective “beauty” over other considerations is not the most useful approach.
Beauty is a hint and an inspiration, not evidence. Sometimes it guides you some place useful, and sometimes it leads you completely astray. Like it has with string theory and countless unified field theories.
Most of these ideas are of the type of “what happens to a spaceship when it goes beyond the cosmological horizon?” and the answer is pretty standard: we build models which work well in certain situations and we apply them to all situations where they are reasonably expected to work, even if we sometimes don’t get to see the results first-hand. You can call it parsimony or symmetry, but the order is reversed: you first build a working model, then apply it wherever it makes sense and adjust or replace as needed where it is outside its domain of applicability based on new observations. In the cases where the observations are not available, you take a chance, but generally not a huge one. For example, there might be a topological domain wall just outside the cosmological horizon, but there are no indications of this being the case given what we know about the universe.
No, it cannot. What you are doing in a self-consistent model is something else. As jessicata and I discussed elsewhere on this site, what we observe is a macrostate, and there are many microstates corresponding to the same macrostate. A “different past” means a state of the world that is in a different microstate than the actual past, while in the same macrostate. So there is no such thing as a counterfactual: “would have been” simply means a different microstate. In that sense it is no different from a state observed in the present or the future.
I am one of those who considers Tegmark’s hierarchy a steaming pile of BS that has nothing to do with physics or reality. So I automatically discount any reasoning based on it. The many-worlds direction is a natural way to try to extrapolate quantum mechanics, but so far it has not produced anything consistent, and it is in direct conflict with general relativity, since all those multiple worlds share the same spacetime, yet produce no obvious gravitational effects despite being macroscopic, even if undetectable by other means because of decoherence. So for now it is just a convenient tool for musing about possible worlds while pretending that they are real. That is how Eliezer uses it, anyway. And the flock of his followers who learned about QM from his sequence on the topic.
The anthropic principle is a different beast, and I agree that it has some usefulness, though not nearly as much as its proponents claim, mainly because you cannot usefully talk about probabilities without specifying a probability distribution. But that’s a different topic.
I don’t understand what “the underlying causality I am part of” can possibly mean, since causality is a human way to model observations. This statement seems to use the mind projection fallacy to invert the relationship between map and territory.
untestable statements about reality are totally possible, and can be very action-relevant, for example in making decisions that only have effects after I die
Obviously. There is a good model of what happens after you die. It has been tested many times on other people. This has nothing to do with untestability of interpretations, which all predict the same thing, because they use the same mathematical formalism.
It is possible to form very-likely-true beliefs about many of these statements using considerations such as parsimony and symmetry.
Not really parsimony or symmetry as the main considerations. What you use is a model of the world that has proven reliable in the past. Parsimony and symmetry are just some of the ideas that were useful in constructing this model. E.g. “when a person dies, the world continues to exist” and “I am a person” are both testable models. Sure, there are models like “I’m a special snowflake”, but they generally don’t survive contact with observations.
Whatever the actual knowledge representation inside our brains looks like, it doesn’t seem like it can be easily translated into the structure of “hypothesis space, logical relations, degrees of belief.”
That strikes me as the main issue when trying to apply Bayesian logic to real-world problems.
They are called “interpretations” and not “theories” for a reason: they are designed to make no new testable predictions. I don’t know what untestable musings can say about the nature of reality, as opposed to the nature of the person doing the musing.
There are often ways to reframe a research question that feels wrong into one which is at least open and answerable, hopefully before one runs out of grad school time. In this case it could be something like “What changes in the laws of the universe would make moral realism a useful model of the world, one that an AGI would be interested in adopting?”
Funny how most philosophers misunderstand what their job is about. They try answering questions instead of asking or clarifying them, i.e. finding a way to pose a question so that it becomes answerable by an actual scientist.
Not related to the main body of your post, just to its false premise.
If the many-worlds interpretation of quantum mechanics is correct...
Interpretations by definition make no difference. Eliezer screwed with so many eager rationalist minds by pushing his pet idea, completely unnecessary and even harmful to zen and the art of cultivating useful thinking patterns and raising the sanity waterline. Interpretations are mind projection fallacies.