The metaphor can be made mathematically precise if we first draw an analogy between human decision-making and optimization methods like simulated annealing and genetic algorithms. These methods look for a locally optimal solution, but add some sort of “noise” term to escape local optima and find a globally optimal one. So if we suppose that someone who wants to stay in his own local minimum has a lower “noise” temperature than someone who is open-minded, then the metaphor starts to make sense on a much more profound level.
ME3
Eliezer, you are right; what I really meant to say was this: once a person finds a locally optimal solution using whatever algorithm, they then have a threshold for changing their mind, and it is that threshold that is analogous to temperature.
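As a concrete version of the temperature metaphor, here is a minimal simulated-annealing sketch (the toy objective and all parameter values are my own invention, not from any real model): the temperature sets the probability of accepting a worse solution, i.e., the threshold for leaving the current local minimum.

```python
import math
import random

def anneal(f, x0, temp=10.0, cooling=0.995, steps=2000, seed=0):
    """Minimize f over the integers near x0, accepting uphill moves
    with probability exp(-delta/temp): a high "temperature" means an
    open mind about leaving the current local minimum."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + rng.choice([-1, 1])
        delta = f(candidate) - f(x)
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature drops.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

# A landscape with a shallow local minimum at 0 and a deeper one at 10.
def f(x):
    return min(x ** 2, (x - 10) ** 2 - 5)

print(anneal(f, x0=0))
```

A closed-minded agent (temperature near zero) simply stays at the starting local minimum; raising the temperature makes crossing the barrier between the two basins possible.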
mitchell: As the Buddhists pointed out a long time ago, the flow of time is actually an illusion. All that you actually experience at any given moment is your present sensory input, plus the memories of the past. But there are any number of experiences involving loss of consciousness that will show that the flow of time as we perceive it is completely subjective (not to say that there is no time “out there,” just that we don’t directly perceive it).
So while I agree that “something is happening,” it does not necessarily consist of one thing after another. Really it’s just another formulation of cogito ergo sum.
This is also relevant in response to Caledonian—the brain does not have to live for any sustained period of time. A Boltzmann brain can pop into existence fully oxygenated with the memories that it is me, typing this response, think about it for a few seconds, and then die of whatever brains die of in interstellar space. From inside the brain, there would be no way to know the difference.
Eliezer: Isn’t it sufficient to say that your brain has an expectation of order because that is how it’s evolved? And what would a brain with no expectation of order even look like? Is it meaningful to talk about a control system that has no model of the outside world?
As I understand it (someone correct me if I’m wrong), there are two problems with the Born rule: 1) It is non-linear, which suggests that it’s not fundamental, since other fundamental laws seem to be linear
2) From my reading of Robin’s article, I gather that the problem with the many-worlds interpretation is: let’s say a world is created for each possible outcome (countable or uncountable). In that case, the vast majority of worlds should end up away from the peaks of the distribution, just because the peaks only occupy a small part of any distribution.
Robin’s solution seems to me equivalent to the Quantum Spaghetti Monster eating the unlikely worlds that we don’t find ourselves in. The key line is “sudden and thermodynamically irreversible.” Actually, that alone should be enough to bury the theory, since aren’t fundamental physical laws supposed to be thermodynamically neutral (time-reversible)?
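The non-linearity in point 1 is easy to see with a toy calculation (the amplitudes below are made-up numbers, not from any real experiment): because probability is the squared magnitude of a complex amplitude, the probability of a sum of amplitudes is not the sum of the individual probabilities, which is exactly what shows up as interference.

```python
import math

def born(amplitude):
    """Born rule: probability = squared magnitude of the amplitude."""
    return abs(amplitude) ** 2

# Two paths with equal magnitude but opposite phase (toy numbers).
a = 1 / math.sqrt(2)
b = -1 / math.sqrt(2)

# If the rule were linear, these two quantities would be equal.
p_separate = born(a) + born(b)  # 0.5 + 0.5 = 1.0
p_combined = born(a + b)        # |0|^2 = 0.0 -- destructive interference

print(p_separate, p_combined)
```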
We could probably eliminate this distraction of consciousness, couldn’t we? I mean, let’s say that Mathematica version 5000 comes out in a few centuries and in addition to its other symbolic algebra capabilities, it comes with a physical-law-prover: you ask it questions and it sets up experiments to answer those questions. So you ask it about quantum mechanics, it does a bunch of double-slit experiments in a robotic lab, and gives you the answer, which includes the Born rule. Consciousness was never involved.
Actually it seems to me like this whole business of quantum probabilities is way overrated (for the non-physicist), because it only really manifests itself in cleverly constructed experiments . . . right? I mean, setting aside exactly how Born’s rule derives from the underlying physics, is there any reason to believe that we would learn anything new by finding out?
1) Can someone tell me to what extent this many-worlds interpretation is really accepted? I mean, nobody told me the news that the collapse interpretation was no longer accepted, and I think I read such things in a recent physics textbook. So, can physicists remark on their experience?
2) I think the notion that the QM equations don’t mean anything refers to the fact that nobody knows what the real substrate is in which QM takes place. It’s a bit analogous to the pre-QM situation with light. People asked, what does light travel in? But since nobody was able to identify any substrate for light, they had to treat the wave-like nature of light as simply an empty metaphor. At least, that’s how the classical theory of light was taught to me.
So in the same way, you say that the amplitudes and configurations are the “reality.” But where do the configurations “exist”? Unless you believe that the universe is being simulated in a computer (which seems like a highly unparsimonious, not to mention anthropocentric, assumption), the equations must be a model of something that’s out there. But it doesn’t seem like we really know anything about what the equations are models of.
Seriously, agreeing with Caledonian.
I remember Eliezer wrote an earlier essay to the effect that GR is a really simple theory, in some information-theoretic sense, and therefore we should optimize our theories based on their information-theoretic complexity. But what’s being missed here is that GR (and SR and Newtonian physics and arithmetic . . .) is simple only when stated on its own terms. That’s WHY it’s a paradigm shift. If you tried to state GR strictly as a modification of Newtonian mechanics in a global coordinate system, you would either fail, or you would end up with something incredibly complex that would appear implausible by information-theoretic counts.
The bits that you fail to count, when looking at a simple theory, are the bits required to represent the entire worldview, which don’t seem like they’re information because they’re just how you look at the world.
What you’re trying to do is find a local optimum in theory-space, but all you’re working with is a projection of theory-space onto the sub-space that is our current way of thinking. Then you find your objective function is not quite zero, but you wave your hands and say, “Hey! It’s lower than what we had before! Why did it take people 30 years to reach this not-quite-minimum when all they had to do was descend the gradient?” I think a lot of people would rather just wait around for someone to come along with an answer that really does minimize the objective function.
Somehow you have to hit upon the right projection of theory-space that happens to include all the right variables. If you have a mistress, I invite you to retire to a cottage with her for a month and see if that helps.
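The projection point can be made concrete with a toy sketch (the objective function and the pinned coordinate are invented purely for illustration): gradient descent restricted to a sub-space converges to something that is “lower than what we had before” yet nowhere near the true minimum, because the direction that matters isn’t in the space being searched.

```python
# Toy "theory-space": two parameters, but the current worldview only
# lets us vary x while y is pinned at 0 (the projection).
def objective(x, y):
    return (x - 1) ** 2 + (y - 3) ** 2  # true minimum at (1, 3), value 0

def descend_x_only(x=0.0, y=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        grad_x = 2 * (x - 1)  # gradient in the only direction we can see
        x -= lr * grad_x
    return x, y

x, y = descend_x_only()
print(objective(x, y))  # ~9.0: lower than the starting 10.0, but far from 0
```

No amount of further descent within the projection closes the remaining gap; only widening the search space (adding the missing variable y) can.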
I also think you are taking the MWI vs. Copenhagen debate too literally. The reason why they are called interpretations is that they don’t literally say anything about the actual underlying wave function. Perhaps, like Goofus in your earlier posts, some physicists have gotten confused and started to think of the interpretations as reality. But the idea that the wave function “collapses” only makes sense as a metaphor to help us understand its behavior. That is all that a theory that makes no predictions can be—a metaphor.
MWI and Copenhagen are different perspectives on the same process. Copenhagen looks at the past behavior of the wave function from the present, and in such cases the wave function behaves AS IF it had previously collapsed. MWI looks at the future behavior of the wave function, where it behaves AS IF it is going to branch. If you look at it that way, the simplest explanation depends on what you are describing: if you are trying to talk about YOUR past history in the wave function, you have no choice but to add in information about each individual branch that was taken from t_0 to t, but if you are talking about the future in general, it is simplest to just include ALL the possible branches.
If you accept that there is no “soul” and your entire consciousness exists only in the physical arrangement of your brain (I more or less believe this), then it would be the height of egotism to require someone to actively preserve your particular brain pattern for an unknown number of years until your body can be reactivated, simply because better ones are sure to come along in the meantime.
I mean, think about your 70-year-old uncle with his outdated ways of thinking and generally eccentric behavior—now think of a freezer full of 700-year-old uncles who want to be unfrozen as soon as the technology exists, just so they can continue making obnoxious forum posts about how they’re smarter than all scientists on earth. Would you want to unfreeze them, except maybe as historical curiosities?
Nick: Not any more ridiculous than throwing out an old computer or an old car or whatever else. If we dispense with the concept of a soul, then there is really no such thing as death, but just states of activity and inactivity for a particular brain. So if you accept that you are going to be inactive for probably decades, then what makes you think you’re going to be worth reactivating?
Similarly, if the Bayesian answer is difficult to compute, that doesn’t mean that Bayes is inapplicable; it means you don’t know what the Bayesian answer is.
So then what good is this Bayes stuff to us exactly, us of the world where the vast majority of things can’t be computed?
P(A&B) <= P(A), P(A|B) >= P(A&B)
Isn’t this just ordinary logic? It doesn’t really require all of probability theory. I believe that logic is a fairly uncontroversial element of scientific thought, though of course occasionally misapplied.
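For what it’s worth, both inequalities can be checked mechanically against arbitrary finite distributions; here is a quick sanity check (the random-atom construction is just one way to generate joint distributions):

```python
import random

rng = random.Random(0)

# Random joint distributions over the four atoms: (A,B), (A,~B), (~A,B), (~A,~B).
for _ in range(1000):
    p = [rng.random() for _ in range(4)]
    total = sum(p)
    p_ab, p_a_notb, p_nota_b, p_nota_notb = (x / total for x in p)
    p_a = p_ab + p_a_notb
    p_b = p_ab + p_nota_b
    assert p_ab <= p_a + 1e-12          # P(A & B) <= P(A)
    assert p_ab / p_b >= p_ab - 1e-12   # P(A | B) >= P(A & B), since P(B) <= 1
print("both inequalities hold")
```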
First, I think this can be said for any field: the textbooks don’t tell you what you really need to know, because what you really need to know is a state of mind that you can only arrive at on your own.
And there are many scientists who do in fact spend time puzzling over how to distinguish good hypotheses from bad. Some don’t, and they spend their days predicting what the future will be like in 2050. But they need not concern us, because they are just examples of people who are bad at what they do.
There is this famous essay: http://www.quackwatch.com/01QuackeryRelatedTopics/signs.html
And also this one: http://wwwcdf.pd.infn.it/~loreti/science.html
I think that I have only now really understood what Eliezer has been getting at with the past ten or so posts, this idea that you could be a scientist if you generated hypotheses using a robot-controlled Ouija board. I think other readers have already said this numerous times, but this strikes me as terribly wrong.
First of all, good luck getting research funding for such hypotheses (and it wouldn’t be fair to leave out funding from the description of Science if you’re including institutional inertia and bias).
And I think we all know that in general, someone who used this method would never be able to get anywhere in academia, simply because they wouldn’t be respected.
That, I think, teaches an important lesson. Individual scientists are not required to come up with correct or even plausible hypotheses because we all know that individual rationality is flawed. But the aggregate community of scientists and the people who fund them work together to evaluate the plausibility of a given hypothesis, and thereby effectively carry out the Bayesian analysis that Eliezer speaks of.
So one of many thousands of scientists can propose an utterly harebrained theory, and even spend his life on it if he wants, and it will barely register as a blip on the collective scientific radar. But when SR and GR were proposed, it was pretty much taken as a given that they were true, because they HAD to be true. I read somewhere that the experiment done by Eddington to verify the bending of light around the sun was far from accurate enough to actually be a verification of relativity. But it was still taken as a verification, because everyone was pretty much convinced anyway. And conversely, no matter how many experiments the cold fusion people do that show some unexpected effects, nobody takes them very seriously.
Now, you might say that this system is horribly inefficient, and many people say this on a regular basis. But here, the problem is simply that no individual human being can process that much information, and so the time it takes for a given data point to propagate through the community is very long. Of course, the internet helps, and if scientific journals were free, that would probably help also. But ultimately, I think this inefficiency is precisely the cost of a network evaluating all of the priors to find out the plausibility of a theory.
Of course, it also reduces a scientist to nothing more than a cog in a machine, and many people who want to be heroic can’t deal with that. But in real life, no scientist is expected to evaluate his own hypothesis. He is expected to come up with a hypothesis, try to verify it if he can get funding, and let the community decide to what extent the results are valid.
Apropos of this, the Eliezer-persuading-his-Jailer-to-let-him-out thing was on reddit yesterday. I read through it and today there’s this. Coincidence?
Anyway, I was thinking about the AI Jailer last night, and my thoughts apply to this equally. I am sure Eliezer has thought of this so maybe he has a clear explanation that he can give me: what makes you think there is such a thing as “intelligence” at all? How do we know that what we have is one thing, and not just a bunch of tricks that help us get around in the world?
It seems to me a kind of anthropocentric fallacy, akin to the ancient peoples thinking that the gods were literally giant humans up in the sky. Now we don’t believe that anymore but we still think any superior being must essentially be a giant human, mind-wise.
To give an analogy: imagine a world with no wheels (and maybe no atmosphere so no flight either). The only way to move is through leg-based locomotion. We rank humans in running ability, and some other species fit into this ranking also, but would it make sense to then talk about making an “Artificial Runner” that can out-run all of us, and run to the store to buy us milk? And if the AR is really that fast, how will we control it, given that it can outrun the fastest human runners? Will the AR cause the human species to go extinct by outrunning all the males to mate with the females and replace us with its own offspring?
Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.
Some people can do it without much effort at all, and not all of them are autistic, so you can’t just say that they’ve repurposed part of their brain for arithmetic. Furthermore, other people learn to multiply with less effort through tricks. So, I don’t think it’s really a flaw in our brains, per se.
By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there. I mean, when your response to an AI researcher’s disagreement is “Like, duh! Go read some sci-fi and then we’ll talk!” who is really in the wrong here?
Doesn’t the Lorentz invariant already pretty much take care of the relativity of time? As long as we preserve the Lorentz invariant, we’re free to reparameterize the universe any way we want, and our description will be the same. So I don’t see what this Barbour guy is going on about; it seems like standard physics. Whether you write your function f(x,t), or f(y) where y = g(x,t), or even just f(x) where t = h(x), is totally irrelevant to the universe. It’s just another coordinate transformation, like translating the whole universe ten meters to the left.
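For concreteness, the invariant being appealed to is the spacetime interval, which every inertial frame agrees on; that agreement is what licenses the arbitrary reparameterizations described above:

```latex
% The Lorentz-invariant interval between two events: any two inertial
% coordinate systems related by a Lorentz transformation assign it the
% same value, even though they disagree about \Delta t and \Delta x
% separately.
\[
  s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2
\]
```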
Now, if you have a new invariant to propose, THAT would amount to an actual change in the laws of physics.
But the main thing that’s different about time is that it has a clear direction whereas the space dimensions don’t. This is caused by the fact that the universe started out in a very low-entropy state, and since then has been evolving into higher entropy. I don’t know if it’s even possible to answer the question of why the universe started out the way it did—it’s almost like asking why anything exists at all. But whatever the reason, the universe is very uniform in its space dimensions, but very non-uniform in its time dimension.
iwdw: there has been some thinking about the universe as an actual Game of Life; Stephen Wolfram’s A New Kind of Science is the one that comes to mind, but I’m sure there are more reputable sources that he stole the idea from. I believe this line of thinking runs into trouble with special relativity.
Speaking of which, has anyone ever attempted to actually model space as a graph of relationships between points, in a computer program? Something like the distance-configuration-space in the last post? It occurs to me that this could actually be a more robust representation for some purposes than just storing the xyz coordinates.
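A minimal sketch of what such a relational representation might look like (the three points and helper names here are hypothetical, just to illustrate the idea): store only the pairwise distances, and let geometric constraints like the triangle inequality live directly in the relations rather than in any coordinate chart.

```python
import math

# Coordinates are used only to bootstrap the example; after building the
# graph, the relational representation stands on its own.
points = ["a", "b", "c"]
coords = {"a": (0.0, 0.0), "b": (3.0, 0.0), "c": (0.0, 4.0)}

def euclid(p, q):
    return math.dist(coords[p], coords[q])

# The relational representation: nothing but distances between pairs.
graph = {(p, q): euclid(p, q) for p in points for q in points if p < q}

# Any candidate geometry must at least satisfy the triangle inequality,
# a constraint expressible entirely in the relations, with no coordinates.
a_b, a_c, b_c = graph[("a", "b")], graph[("a", "c")], graph[("b", "c")]
assert b_c <= a_b + a_c
print(graph)
```

One possible advantage over storing xyz coordinates is that the representation is automatically invariant under translations and rotations, since only relative distances are kept.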
Eliezer: I actually have been getting the insights you speak of repeatedly throughout this series, and it’s one of the reasons why I find it helpful to post comments—because it forces me to think through the ideas well enough to get their occasional mind-bendingness. It’s also why I have continued reading despite all the what-is-Science business.
But I still think that the subjective time-like-ness of time, as well as the concept of causality, are all caused (ha-ha) by the universe starting out in a low-entropy state. So if you had a toy block universe in your hands, you would still see a direction in the block corresponding to time. There is no way to assign a meaningful distance in that direction for the whole universe because of the locality of physics, but the direction is global, isn’t it?
I am also struck by the correlation-vs.-causation issue in the Canadian voters study. Moreover, how do we know that the attractiveness rating isn’t actually a reflection of the qualities the voters claim to be looking for? I.e., a more confident, intelligent, eloquent candidate would probably appear more attractive than one who isn’t, all other things being equal.