You make a lot of interesting points, but how do you apply them to the question at hand: what should you have for dinner, and why?
This is a fascinating topic, and I hope it attracts more commentary. As Bentarm says, it is important and relevant to each of us, yet the topic is fraught with uncertainty, and it is expensive to try to reduce the uncertainty.
I do not believe Taubes. No one book can outweigh the millions of pages of scientific research which have led to the current consensus in the field. Taubes is polemical, argumentative, biased, and one-sided in his presentation. He makes no pretense of offering an objective weighing of the evidence for and against various nutritional hypotheses. He is selling a point of view, plain and simple. No doubt he felt such a forceful approach was necessary given the enormous odds he faces in trying to gain a hearing for his ideas. But the fact remains that the reader must keep in mind that he is only hearing one side of the story.
Weighed against Taubes (and others who have advocated similar positions) we must consider the entire scientific establishment, thousands of researchers who dedicate their lives to the pursuit of knowledge. To believe Taubes, we must believe that these people are basing their entire professional careers on a foundation of falsehoods. Worse, from the lack of impact Taubes’ book has had on consensus opinion, we have to imagine that researchers are willfully ignoring the truths that Taubes so convincingly reveals. Nutrition researchers are intentionally lying and covering up the truth in order to protect the false dogma of the field. (Note that this is exactly the same critique of researchers made by global warming skeptics.)
I can’t believe that scientists are so dishonest, or that such a cover-up could be executed successfully. I can’t imagine how any young, budding nutrition researcher could go to work in a post-Taubes world with a clean conscience, if the book is really as convincing as it claims to be.
My conclusion is that to someone intimately acquainted with the field, Taubes’ book is not as persuasive as it appears to the layman.
Now, I will confess that I have some independent reasons to doubt Taubes. But I would prefer not to go into that because IMO the argument I have outlined above is sufficient. Never believe a polemical, one-sided book which has been rejected by the scientific establishment. I offer that as a valid heuristic which has proven correct in the overwhelming majority of cases.
Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.
Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.
I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates—perhaps related to the phenomenon of “violent agreement” we often see.
Strange how this small change to the conditions gives such different results. But it’s a good example of how agreement is inevitable.
I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?
Now if one sees a 1 or 2, he knows the probability is zero. But let’s suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1⁄6. Both players exchange this first estimate. Now, curiously, although they agree, it is not yet common knowledge that this value of 1⁄6 is their shared estimate. After hearing 1⁄6 rather than 0, each knows that the other die is one of the four values 3-6. So each now calculates the probability as 1⁄4, and this is common knowledge (why?).
And of course this estimate of 1⁄4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
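For anyone who wants to check the arithmetic, here is a small sketch in Python (the function names are mine) that verifies the 1⁄6 and 1⁄4 estimates by exact enumeration:

```python
from fractions import Fraction
from itertools import product

# First estimate: my die is d (in 3..6); the other die is uniform on 1..6.
def first_estimate(d):
    return Fraction(sum(1 for e in range(1, 7) if d + e == 9), 6)

# After both announce 1/6 (rather than 0), each knows the other die is in 3..6.
def second_estimate(d):
    return Fraction(sum(1 for e in range(3, 7) if d + e == 9), 4)

assert all(first_estimate(d) == Fraction(1, 6) for d in range(3, 7))
assert all(second_estimate(d) == Fraction(1, 4) for d in range(3, 7))

# Pooling the actual die values, by contrast, always gives 0 or 1.
assert all(int(d + e == 9) in (0, 1) for d, e in product(range(1, 7), repeat=2))
```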
Let me give an argument in favor of #4, doing what the others do, in the thermometer problem. In that problem we seem to have people behaving badly: I think in practice many people would in fact look at the other thermometers too in making their guesses. So why aren’t they doing it? Two possibilities: they’re stupid, or they have a good reason. An example of a good reason: some thermometers don’t read properly from a side angle, so although you think you can see and read all of them, you might be wrong. (This could be solved by #3, writing down the average of the cards, but that doesn’t work if everyone tries it, since everyone will be waiting for everyone else to go first.)
Only if we add a stipulation to this problem, that you are usually right when everyone else is wrong, would it be a good idea to buck the crowd. And even then there is the danger that the others may have some private information that supports their seemingly illogical actions.
Actually if Omega literally materialized out of thin air before me, I would be amazed and consider him a very powerful and perhaps supernatural entity, so would probably pay him just to stay on his good side. Depending on how literally we take the “Omega appears” part of this thought experiment, it may not be as absurd as it seems.
Even if Omega just steps out of a taxi or whatever, some people in some circumstances would pay him. The Jim Carrey movie “Yes Man” is supposedly based on a true story of someone who decided to say yes to everything, and had very good results. Omega would only appear to such people.
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly a SAI could figure out what your body must have been like just from your brain, not sure.
Recently, however, I have contracted a disease which will kill most of my motor neurons. So the body will be of less value, and I may change to head-only preservation.
The way motor neurons work is that an upper motor neuron (UMN) descends from the motor cortex of the brain down into the spinal cord; there it synapses onto a lower motor neuron (LMN), which projects from the spinal cord to the muscle. Just two steps. In reality, though, the architecture is more complex: the LMNs receive inputs not only from UMNs but also from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, and so on. There are also other kinds of neurons that descend from the brain into the spinal cord, including some from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job; very odd.
Like others, I see some ambiguity here. Let me assume that the substrate includes not just the neurons, but the glial and other support cells and structures; and that there needs to be blood or equivalent to supply fuel, energy and other stuff. Then the question is whether this brain as a physical entity can function as the substrate, by itself, for high level mental functions.
I would give this 95%.
That is low for me, a year ago I would probably have said 98 or 99%. But I have been learning more about the nervous system these past few months. The brain’s workings seem sufficiently mysterious and counter-intuitive that I wonder if maybe there is something fundamental we are missing. And I don’t mean consciousness at all, I just mean the brain’s extraordinary speed and robustness.
Another sample problem domain is crossword puzzles:
Don’t stop at the first good answer—You can’t just write in the first word that seems to fit; you need to see whether it will let you build the other words.
Explore multiple approaches simultaneously—Same idea, you often can think of a few different possible words that could work in a particular area of the puzzle, and you need to keep them all in mind as you work to solve the other words.
Trust your intuitions, but don’t waste too much time arguing for them—This one doesn’t apply much because usually people don’t fight over crossword puzzles.
Go meta—This is a big one, because usually crossword puzzles have a theme, often quite subtle, and if you look carefully you can see how your answers are building as part of a whole. This then gives you another direction to get ideas for possible answers, as things that would go with the theme, rather than just taking the clues literally.
Dissolve the question—Well, I don’t know about this, but I suppose if you get frustrated enough you could throw the puzzle into the trash.
Sleep on it—This works well for this kind of puzzle, I find. Coming back to it in the morning you will often make more progress.
Be ready to recognize a good answer when you see it—Once you have enough crossing words in mind you can have good confidence that you are on the right track and go ahead and write those in, even if you don’t have good ideas for some of the linked words. You need to recognize that when enough parts come together and your solution makes them fit, that is a strong clue that you are making progress, even if there are still unanswered aspects.
A perhaps similar example, sometimes I have solved geometry problems (on tests) by using analytical geometry. Transform the problem into algebra by letting point 1 be (x1,y1), point 2 be (x2,y2), etc, get equations for the lines between the points, calculate their points of intersection, and so on. Sometimes this gives the answer with just mechanical application of algebra, no real insight or pattern recognition needed.
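As a sketch of how mechanical this can be, here is the method applied to a made-up example: finding where two medians of a triangle cross (the helper names and the particular triangle are my own, purely for illustration):

```python
from fractions import Fraction

def line_through(p, q):
    # Line through points p and q, as coefficients (a, b, c) with a*x + b*y = c.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect(l1, l2):
    # Solve the 2x2 linear system by Cramer's rule -- pure algebra, no insight.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return Fraction(c1 * b2 - c2 * b1, det), Fraction(a1 * c2 - a2 * c1, det)

# Two medians of the triangle (0,0), (6,0), (0,6):
m1 = line_through((0, 0), (3, 3))   # from (0,0) to the midpoint of the opposite side
m2 = line_through((6, 0), (0, 3))   # from (6,0) to the midpoint of the opposite side
x, y = intersect(m1, m2)
print(x, y)  # 2 2 -- the centroid, found by grinding through the algebra
```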
I wouldn’t be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are. There are probably strong limits to the persuadability of human beings, so it wouldn’t be a disaster. And there is a long tradition of advice regarding the (claimed) wisdom of learning to enjoy life as you find it.
I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically we could encourage people to think for themselves, so we can free-ride on the costs they experience.
Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?
Wouldn’t we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn’t stink?
Would we wonder why the part of the brain for hearing high pitches didn’t sound like a high pitch? Why the part which feels a punch in the nose doesn’t actually reach out and punch us in the nose when we lean close?
I can’t help feeling that this line of questioning is bizarre and unproductive.
An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.
I don’t think your question is well represented by the phrase “where is computation”.
Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer’s hardware.
For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation.
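A minimal sketch of the ambiguity, for concreteness (the 8-bit register width is an arbitrary assumption): the very same state sequence supports all three readings.

```python
WIDTH = 8
# One physical history: a register repeatedly incremented.
states = [i % 2 ** WIDTH for i in range(20)]

as_counter    = states                   # reading 1: a program that counts
as_bit_flip   = [s & 1 for s in states]  # reading 2: the lowest bit, repeatedly complemented
as_even_count = [s << 1 for s in states] # reading 3: bits concatenated with 0, counting evens

print(as_counter[:4])     # [0, 1, 2, 3]
print(as_bit_flip[:4])    # [0, 1, 0, 1]
print(as_even_count[:4])  # [0, 2, 4, 6]
```

Nothing in the hardware privileges one reading over another; the difference lies entirely in the interpretation function we apply to the states.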
Chalmers in the previous link aims to resolve the ambiguity by certain rules; basically some interpretations count and some don’t. And maybe there is an unresolved ambiguity in the end. But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain; the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
In summary: although one can theoretically map any computation onto any physical system, for a system like we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain’s behavior, claim to have qualia, and pose the same hard questions about where the color blue lies among the electronic circuits.
Thomas Nagel’s classic essay What is it like to be a bat? raises the question of a bat’s qualia:
Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one’s arms, which enables one to fly around at dusk and dawn catching insects in one’s mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one’s feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.
I also wonder whether Deep Blue could be said to possess chess qualia of a type which are similarly inaccessible to us. When we play chess we are somewhat in the position of the man in Searle’s Chinese Room who simulates a Chinese woman. We simulate Deep Blue when we play chess, and our lack of access to any chess qualia no more disproves their existence than the failure of Searle’s man to understand Chinese.
Do you think it will ever be possible to say whether chess qualia exist, and what they are like? Will we ever understand what it is like to be a bat?
A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?
One question on your objections: how would you characterize the state of two human rationalist wannabes who have failed to reach agreement? Would you say that their disagreement is common knowledge, or instead are they uncertain if they have a disagreement?
ISTM that people usually find themselves rather certain that they are in disagreement and that this is common knowledge. Aumann’s theorem seems to forbid this even if we assume that the calculations are intractable.
The rational way to characterize the situation, if in fact intractability is a practical objection, would be that each party says he is unsure of what his opinion should be, because the information is too complex for him to make a decision. If circumstances force him to adopt a belief to act on, maybe it is rational for the two to choose different actions, but they should admit that they do not really have good grounds to assume that their choice is better than the other person’s. Hence they really are not certain that they are in disagreement, in accordance with the theorem. Again this is in striking contrast to actual human behavior even among wannabes.
Try a concrete example: Two dice are thrown, and each agent learns one die’s value. In addition, each learns whether the other die is in the range 1-3 vs 4-6. Now what can we say about the sum of the dice?
Suppose player 1 sees a 2 and learns that player 2′s die is in 1-3. Then he knows that player 2 knows that player 1′s die is in 1-3. It is common knowledge that the sum is in 2-6.
You could graph it by drawing a 6x6 grid and circling the information partition of player 1 in one color, and player 2 in another color. You will find that the meet is a partition of 4 elements, each a 3x3 grid in one of the corners.
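If you’d rather compute the meet than draw it, here is one way to sketch it in Python (the cell-labeling helpers are my own naming). The meet is the finest common coarsening of the two partitions, which you get by connecting any two states that share a cell in either partition and taking connected components:

```python
from itertools import product

half = lambda x: 0 if x <= 3 else 1
states = list(product(range(1, 7), repeat=2))  # the 6x6 grid of (die1, die2)

# Each player knows his own die plus which half the other die is in.
cell1 = lambda s: (s[0], half(s[1]))   # player 1's information cell for state s
cell2 = lambda s: (half(s[0]), s[1])   # player 2's information cell

parent = {s: s for s in states}
def find(s):
    while parent[s] != s:
        parent[s] = parent[parent[s]]
        s = parent[s]
    return s

for label in (cell1, cell2):
    groups = {}
    for s in states:
        groups.setdefault(label(s), []).append(s)
    for members in groups.values():
        for s in members[1:]:
            parent[find(s)] = find(members[0])  # merge states in the same cell

components = {}
for s in states:
    components.setdefault(find(s), []).append(s)

print(len(components), sorted(len(c) for c in components.values()))
# 4 [9, 9, 9, 9] -- four 3x3 quadrants, as described above
```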
In general, anything which is common knowledge will limit the meet—that is, the element of the meet that the actual world is in will not extend to include world-states which contradict what is common knowledge. If 2 people disagree about global warming, it is probably common knowledge what the current CO2 level is and what the historical record of that level is. They agree on this data and each knows that the other agrees, etc.
The thrust of the theorem though is not what is common knowledge before, but what is common knowledge after. The claim is that it cannot be common knowledge that the two parties disagree.
Years ago, before coming up with even crazier ideas, Wei Dai invented a concept that I named UDASSA. One way to think of the idea is that the universe actually consists of an infinite number of Universal Turing Machines running all possible programs. Some of these programs “simulate” or even “create” virtual universes with conscious entities in them. We are those entities.
Generally, different programs can produce the same output; and even programs that produce different output can have identical subsets of their output that may include conscious entities. So we live in more than one program’s output. There is no meaning to the question of what program our observable universe is actually running. We are present in the outputs of all programs that can produce our experiences, including the Odin one.
Probability enters the picture if we consider that a UTM program of n bits is being run in 1/2^n of the UTMs (because 1/2^n of all infinite bit strings will start with that n bit string). That means that most of our instances are present in the outputs of relatively short programs. The Odin program is much longer (we will assume) than one without him, so the overwhelming majority of our copies are in universes without Odin. Probabilistically, we can bet that it’s overwhelmingly likely that Odin does not exist.
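A quick sketch of the counting behind that claim (the 100- and 120-bit program lengths are made up purely for illustration):

```python
from fractions import Fraction

def prefix_weight(n, total_len):
    # Fraction of all bit strings of length total_len that begin with a given
    # n-bit prefix: 2^(total_len - n) out of 2^total_len, i.e. 1/2^n.
    return Fraction(2 ** (total_len - n), 2 ** total_len)

no_odin = prefix_weight(100, 1000)  # hypothetical 100-bit program without Odin
odin    = prefix_weight(120, 1000)  # hypothetical 120-bit program with Odin

print(no_odin / odin)  # 1048576 -- the shorter program gets 2^20 times the weight
```

So even a modest 20-bit penalty for encoding Odin makes the Odin-free copies of us outnumber the others about a million to one.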