Well, if you believe in a utilitarian theory of morality, then the most ethical thing to do is to maximize utility (happiness) for everyone, including yourself. So basically, you should have as much fun as you can, except in cases where you could devote that same effort to increasing someone else’s happiness by a greater amount.
Interesting question.
I would say that a combination of the first two (public opinion changes black-box output, and changes in polls change black-box output) is the most likely. I can think of some common political observations that seem to match those theories (for example, a President who wins an election by a large margin is considered to have more “political capital,” making it easier for him to get things done in Congress in the near future).
I guess the best way to distinguish between, say, “public opinion changes cause political change” and “political change causes public opinion change” is to look at which happens first. Looking at some historical examples, especially on really big issues that people feel strongly about, it seems that there is a lead time: first people feel strongly about an issue, and then, a few years later, the political outcome changes. For example, look at Prohibition. Popular opinion in favor of Prohibition grew for several years before it actually happened (it was a significant part of the progressive movement). Then, after it actually happened and people saw the flaws, popular opinion changed over time, but it took several years after Prohibition became unpopular before it was actually repealed. So there seems to be a lag time in most cases, which implies the direction of causality.
It’s harder to rule out the possibility that there is some third factor that explains both. Perhaps in some cases there is some kind of general cultural or media shift that affects both popular opinion and the politicians’ opinions at the same time? Even if that is so, though, it still seems like a system that keeps the attitudes of the population and the end result of the government in sync is likely to be more stable than one that doesn’t, so that still might not be a bad thing.
Anyway, the point of my post was just that, despite all the flaws of our democracy, the attitudes and votes of the population as a whole seem to have a greater influence on the outcome than Eliezer was suggesting.
This kind of thing is often considered one of the main roles of government: funding important projects on a constant basis over a long period of time. It’s hard to fund those with charity; charity funding tends to be inconsistent over time, and people who do give to charity are likely to give to whatever cause is “popular” that year. I wouldn’t want to try to fund “maintenance of one specific bridge every year over the next 50 years” with just charitable contributions from people who use that bridge and benefit from it, because some years they might donate more than enough to maintain the bridge, but some years they might not. Taxation and a constant revenue stream are just a more efficient and consistent solution to the problem. That’s likely true for funding long-term science as well.
In fact, this is a historical theory for how the first large governments formed. In both ancient Egypt and ancient China, the large central governments were able to build massive canal systems that dramatically improved farming and therefore the quality of life of everyone. It would have been pretty unlikely for those to be built through voluntary contributions (either of money or of labor) alone; too many people would have defected to make that a practical solution, at least over the long term.
That being said, if you’re relying on governments for that, there’s always the risk that the government will decide to take your tax money and spend it on something useless (in the case of Egypt, that would be the pyramids), so some kind of auditing of the government’s spending and feedback to improve it is fairly important.
I have a similar, but slightly different theory, based on what I’ve read on neuroscience.
Let’s say you are sitting on a couch, in front of a plate of potato chips.
Several processes in your brain that your conscious mind is not aware of activate, and decide that you want to reach out and eat a potato chip. This happens in an evolutionarily very ancient part of your brain.
After your subconscious mind has created this desire but before you actually act on it, your conscious mind becomes aware of it. At this point, your conscious mind has some degree of veto power over the decision (what we usually perceive as “self-control”). You may think it’s unhealthy to eat a potato chip right now, and “decide” not to (that is, your conscious-mind algorithm overrides your instinctive algorithm). This “self-control” is not total, however; if you are hungry enough, you may not be able to “resist.” Also, if your conscious mind is distracted (say, you are playing a very involving video game), you may eat the chips without really noticing what you are doing.
So, from the point of view of your conscious mind, an idea came from somewhere else to eat chips, and then your conscious mind “chose” whether or not to do it.
The “stone the heretic” evolution argument you’re making here doesn’t really seem to work, because that only becomes a possible state of affairs once 95% of a population is already religious, and by that point, whatever genetic code makes it possible for people to be religious is already basically universal in the population. It may fix the genetic code, but it’s not a good hypothesis for how that genotype became universal in the first place.
I could easily come up with more logical hypotheses for a religious mindset being an evolutionary advantage as far as individual fitness goes. For example, perhaps a “religious” mindset is less prone to the “existential despair” failure mode than a “non-religious” mindset, and perhaps that particular failure mode is detrimental to your chances of surviving adolescence.
Yeah, that’s one common theory for the start of religious belief: basically, that we evolved a natural ability both to predict the future and to predict what other people would do, and that religious thought and religious belief were a side effect of that, especially for dealing with unusual events that weren’t obviously predictable based on what people knew at the time. That’s quite possible.
What Eliezer was talking about is an entirely different theory; the theory that religious belief (or some genetic predisposition to religious belief) was actually itself something that was selected for by evolution; not as a side effect of some other trait, but as something that was directly selected for, something that gave individuals a fitness advantage.
I agree with him that the group selection argument doesn’t really seem to work here, I just don’t think that his theory (the “stone the heretic” hypothesis) makes sense either.
It’s not necessarily an excuse for failure.
If, on some level, you are looking to demonstrate fitness (perhaps as a signaling method to potential mates), then if you visibly handicap yourself and STILL win, you have demonstrated MORE fitness than if you had won normally. If you expect to win even with the self-handicap, then it’s not just a matter of making excuses.
I think this is similar to how a chess master playing against a weaker player will often “give them rook odds,” starting with only one rook instead of two. They still expect to win, but they know that if they can still win in that circumstance, they have demonstrated what a strong player they are.
I think that, to a certain extent, if you know that a large number of people in your tribe have come to a specific conclusion based on available evidence, then even if you haven’t come to that same conclusion, there is a natural tendency to “keep an open mind” about it, to keep a working hypothesis of a model where they are right and you are wrong, and to occasionally test that hypothesis.
Even though you have worked out that there are no ghosts through a process of rational thinking, you do know that a lot of people have come to the conclusion that there are ghosts through non-rational thinking, and there is some non-zero chance that somehow their methods of reasoning might turn out to be correct. That is, there is always a possibility that in some fundamental way your entire method of looking at the universe and analyzing data might be somehow flawed in a way theirs is not.
It’s hard to ever set a precise probability value on that, although it seems very unlikely, but your survival instincts are keyed to avoid low-probability danger events (in a way that your resource-gathering instincts are not). It’s fundamentally much easier to knowingly gamble with your money than to knowingly gamble with your life, in other words.
Or, alternately, they know that they will feel better after the test if the test tells them that they have a healthier heart, so they act in such a way as to get that internal reward.
Human reward mechanisms just aren’t set up properly; in this case, the internal reward isn’t actually for living longer, it’s for passing the test, so you try to pass the test. In some ways, it’s similar to a standardized test in school; your actual aptitude or intelligence doesn’t change based on how hard you try on the test, but you try hard anyway, because your reward mechanisms aren’t based on your intelligence per se, they’re based on the test score, so that’s what you try to maximize.
They’re probably both actual altruists.
The second guy is just an altruist who doesn’t trust that everyone else is altruistic, so he’s trying to convince selfish people to be altruistic using selfish logic.
I actually wanted to bet about $250.00 that Obama was going to win in 2012 on Intrade, because I thought Nate Silver’s models were more accurate than the market, but my wife really didn’t want me to, heh. Still, there was a pretty big gap between the market (60% Obama or so) and the mathematical models (Nate Silver had Obama at over 90%; some other models were even higher).
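(For a rough sense of why that gap looked attractive, here is a back-of-the-envelope calculation, assuming for illustration that contracts were priced at the market-implied 60%, i.e. about $0.60 per $1 payout, and taking Silver’s ~90% as the true probability:

$$\mathbb{E}[\text{profit per contract}] = 0.9 \times (\$1.00 - \$0.60) - 0.1 \times \$0.60 = \$0.30,$$

or roughly a 50% expected return on each $0.60 staked; on a $250 position, that would be about $125 in expectation.)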
On a more general note, I don’t think the prediction markets in general were that accurate in 2012, because A: a lot of people were relying on provably false information, and B: I think a lot of people invested in the markets not because they thought they knew the result, but because they wanted to manipulate the result (if the Intrade market showed a close race, then the media might report on that, which might influence people’s minds, etc.).
Ah; I had missed that news story, that’s a shame.
“A priori” has always just seemed to me like another way to describe what we call an “assumption” in classical logic. You can’t deduce anything in classical logic without starting from certain assumptions and seeing what you can deduce from them, and one of the strengths of classical logic is that it forces you to actually list your assumptions up front, so someone else can say, “I agree with your reasoning, but I think your assumption ‘B’ is invalid.”
Trying to take assumptions apart, seeing if they are valid, seeing if they can either be proven inductively from evidence or deductively from other assumptions, and trying to figure out where a specific assumption comes from is a very valid thing to do (hitting the “explain” button), but on some level, I think you are always going to need some assumptions in order to use any logical system (either classical logic or Bayesian reasoning).
This sounds like a “tree falling in the woods” type of argument, at least the way you have it laid out here. They are using the word “want” to mean fundamentally different things. Subhan is using “want” to include all mental processes that encourage you to behave in a certain way, which I think is a categorization error that is causing him to come to wrong conclusions.
Well, if you wanted to actually test Occam’s razor in a scientific way, you would have to test it against an alternate hypothesis and see which one gave better predictions, wouldn’t you?
So how about this as an alternate hypothesis:
“Occam’s Razor has no objective truth value; there is no fundamental reason that the truth is more likely to be a simpler explanation. It only SEEMS like Occam’s Razor is true because it is exponentially harder to find a valid explanation in a larger truth-space, so usually when we do manage to find a valid explanation for something, it is a simple explanation. But that is merely a question of the map, and of finding a specific spot on the map, not of the territory itself.”
What kind of experiment would you set up to differentiate that possibility from Occam’s Razor being correct?
Well, let’s see. If you take an unsolved scientific question and create the hypothesis that is the simplest possible fit for the known facts, how often does that hypothesis turn out to be true, and how often does a more complex answer turn out to be true instead?
I’m sure we can all think of examples and counterexamples (Newton’s theories are a much simpler fit for the facts he knew than relativity, but they turned out not to be true), but you would probably have to take a large sample of scientific problems.
I would think that Occam’s razor would turn out to be correct most of the time in a statistical analysis, but it seems like a testable hypothesis, at least over the set of problems human scientists have already solved.
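Something like the following is a minimal sketch of the tally I have in mind, assuming you had a dataset of solved problems coded for whether the simplest hypothesis consistent with the evidence at the time turned out to be the accepted answer (the `problems` list and its fields below are invented placeholders, not real data):

```python
# Minimal sketch of the proposed tally. The entries below are invented
# placeholders; a real test would need a large, carefully coded sample
# of historical scientific problems.

problems = [
    # name, plus whether the simplest hypothesis consistent with the
    # evidence available at the time turned out to be the accepted answer
    {"name": "placeholder problem A", "simplest_was_correct": True},
    {"name": "placeholder problem B", "simplest_was_correct": False},
]

hits = sum(p["simplest_was_correct"] for p in problems)
total = len(problems)
print(f"Simplest hypothesis held in {hits}/{total} cases ({hits / total:.0%})")
```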
I will admit that I’m struggling a bit here, because I’m having trouble coming up with a coherent mental picture of what a legitimate alternate hypothesis to Occam’s razor would actually look like.
In fact, if you take my hypothesis to be true, then Occam’s razor would still fundamentally hold, at least in the simplest form of “a less complicated theory is more likely to be true than a more complicated one,” since if “theory-space A” is smaller than “theory-space B,” then any given point in theory-space A is more likely to be true than any given point in theory-space B, even if the answer has an equal chance of being in space A as it does of being in space B. So I think my original hypothesis actually itself reduces to Occam’s Razor.
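To spell that last step out with a toy calculation (assuming, for illustration, that every hypothesis within a given space is equally likely a priori): if the true theory is equally likely to lie in the smaller space A or the larger space B, then for any particular hypotheses $h_A \in A$ and $h_B \in B$,

$$P(h_A \text{ is true}) = \frac{1}{2}\cdot\frac{1}{|A|} \;>\; \frac{1}{2}\cdot\frac{1}{|B|} = P(h_B \text{ is true}) \quad \text{whenever } |A| < |B|,$$

which is just the Occam-like statement that an individual simple hypothesis gets more probability mass than an individual complex one.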
I think this is where I just say oops and drop this whole train of thought.
Yes, this is what I was going to say.
As time goes on, we seem to add more and more people and groups of people to the category of beings we treat morally. You could describe most major changes in morality over time this way: the elimination of slavery, women’s suffrage, the laws of war, better treatment of the mentally ill, even the idea that it’s bad to torture cats for your own amusement could all be called “expanding the group of beings who we feel we have to treat ethically.”
To explain Gandhi, and altruistic behavior in general beyond what makes sense from an evolutionary standpoint, I would say that we have taken the tools that evolution gave us for very different purposes (including friendship, love, empathy, and so on) and fundamentally re-purposed them. Our brain, our hardware, is a product of evolution, but our brain has been shown to be highly plastic and flexible, especially when we are young. Our culture, our education, and our upbringing are a big part of the software we are running on our brain, and, to a large extent, we write our own software; not individually, necessarily, but on a cultural level over time.
We have spent a great deal of time and energy over the millennia trying to re-write our cultural software in ways that encourage altruism and good behavior from all. We spend a lot of time trying to instill this in our children, in each other, and in ourselves. Is it really a surprise if we have to some extent succeeded? If you spend a lot of time and energy doing a certain type of thinking, then that part of your brain will tend to develop more connections and become more developed; that has been demonstrated experimentally.
This was a really thought-provoking article.
I will say that I think, in practice, the voters seem to have a larger impact on the outcomes of the democratic process than you would assume.
My hypothesis would be that if you temporarily ignore all the ugliness and irrationality of the democratic process itself, and just consider it a big black box, there is a high correlation between what voters as a whole want on any single narrow issue and what outcome the black box produces. There is a significant lag time, but it does seem that if a majority of voters strongly want something to change, and they are consistent about it for a number of years, that outcome will change.
Just for a quick example, if you look at how voters’ views on gay rights in general and gay marriage in particular have evolved over the past 20 years or so, you can see that the government’s policies on the subject seem to follow the voters’ will. Four years ago, most voters were opposed to gay marriage, and so were most politicians (including President Obama). Now, a majority of voters are in favor of gay marriage, it is happening in more states, and even Obama himself has changed his view on the issue. Other issues, like the legalization of marijuana, have gone through a similar trajectory.
You can go state-by-state as well on this. Take a contentious issue like gun control, and go across the board; states where the voters are generally more in favor of gun control tend to have more gun control (California, for example); states where voters are less in favor of gun control tend to have less gun control. To a significant extent, the output of the black box (the government policies) tends in the long run to mirror the inputs (voter opinions).
The exception is on big meta-issues, like “the size of government.” The problem there is that while polls consistently say that people want a smaller government, whenever you ask them about specific issues (“Should we cut the military?” “Should we cut Medicare?” “Should we cut Social Security?”) people consistently say, “No, we shouldn’t.” So, from the irrationality of the voters, we get politicians who mirror them and say that they want a smaller government while refusing to cut any specific popular government programs.