My Web of Opinions, FWIW:
Ontology and Epistemology: Philosophy is very much a matter of opinion. But that doesn’t mean that one philosophical position is as good as another. Some are better than others—but not because they are better supported by reasons. “Support” has nothing whatever to do with the goodness of a philosophical position. Instead, positions should be judged by their merits as a vantage point. Philosophers shouldn’t seek positions based on truth, they should seek positions based on fruitfulness.
Language and Logic: Bertrand Russell and W.V.O. Quine have much to answer for. Michael Dummett, Saul Kripke, and David Lewis have repaired some of the damage. Eliezer’s naive realism and over-enthusiasm for reductionism set my teeth on edge. We need a more pluralist account of theories, models, and science. Type theory and constructivism are the wave of the future in foundational mathematics.
Ethics: Ethics answers the question “What actions deserve approval or disapproval?” (Not the question “What ought I to do?”) The question is answered by Nash (1953) in the two-person case. Actions that do not at least conform to the (unique, correct) bargain deserve disapproval and punishment. Notice that the question presumes rational agents with perfect information; that is why what you ought (ethically) to do may sometimes differ from what you ought (practically) to do. Future utilities should be discounted at a rate of at least 1% per year, which compounds to roughly 50% over a 70-year lifetime.
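Since the paragraph leans on Nash’s two-person bargain, a minimal sketch may make the claim concrete: the Nash bargaining solution picks the feasible outcome maximizing the product of each player’s gain over the disagreement point. The pie-splitting setup and the particular utility functions below are my illustrative assumptions, not anything from the comment itself.

```python
def nash_bargain(u1, u2, d1=0.0, d2=0.0, steps=10_000):
    """Grid-search the split x in [0, 1] of a unit pie that maximizes
    (u1(x) - d1) * (u2(1 - x) - d2), i.e. the Nash product over the
    disagreement point (d1, d2)."""
    best_x, best_prod = 0.0, float("-inf")
    for i in range(steps + 1):
        x = i / steps
        prod = (u1(x) - d1) * (u2(1 - x) - d2)
        if prod > best_prod:
            best_x, best_prod = x, prod
    return best_x

# Two risk-neutral players with identical outside options split evenly.
even_split = nash_bargain(lambda x: x, lambda y: y)

# A risk-averse opponent (concave utility) concedes a larger share:
# maximizing x * sqrt(1 - x) gives player 1 two-thirds of the pie.
uneven_split = nash_bargain(lambda x: x, lambda y: y ** 0.5)
```

The second call illustrates why the solution is said to reward bargaining position: curvature in one player’s utility alone shifts the unique bargain.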
Rationality and Decision Theory: I suspect that TDT/UDT are going in the wrong direction, but perhaps I just don’t understand them yet. The most fruitful research issues here seem to be: modeling agents as coalitions of subagents; building models in which it can be rational to change one’s own utility function; biology-inspired modeling using the Price equation and Hamilton’s rule; and coalition formation and structure in multi-agent Nash bargaining. Oh, and the rational creation and destruction of one agent by another.
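For the biology-inspired direction, Hamilton’s rule is compact enough to state in a few lines: an altruistic trait is favored by kin selection when r·b > c, where r is genetic relatedness, b the benefit to the recipient, and c the cost to the actor. The numeric values here are illustrative only.

```python
def hamilton_favored(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruism is selected for when the
    relatedness-weighted benefit exceeds the actor's cost."""
    return r * b > c

# Helping a full sibling (r = 1/2) pays if the benefit is more than
# double the cost; the same act toward a first cousin (r = 1/8) does not.
sibling_ok = hamilton_favored(r=0.5, b=3.0, c=1.0)    # 0.5 * 3 = 1.5 > 1
cousin_ok = hamilton_favored(r=0.125, b=3.0, c=1.0)   # 0.375 < 1
```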
Futurism: The biggest existential risk facing mankind is uFAI. Trying to build a FOOMing AI which has the fixed goal of advancing extrapolated human values seems incredibly dangerous to me. Instead, I think we should try to avoid a singleton and do all we can to prevent the creation of AIs with long-term goals. But at this stage, that is just a guess. There is a 50%+ probability, though, that there is no big AI risk at all, and that super-intelligences will not be all that much more powerful than smart people and organizations.
Note: posting this feels very self-indulgent. Though I do see value in setting it down as a milestone for later comparison.
Perplexed,
I’m curious to know what you mean by saying that philosophy is a matter of opinion. From your paragraph, it appears you would say that even the most highly confirmed and productive theories of physics and chemistry are also matters of “opinion.” For me, that’s an odd way to use the term “opinion”, but unfortunately, I don’t own a trademark on the term! :)
Have I understood you correctly?
I don’t think you have completely misunderstood. It is certainly possible to think of General Relativity or Molecular Orbital Theory as simply a very “fruitful vantage point”. But that is not really what I intended. After all, the final arbiter of the goodness of these scientific theories is experiment.
The same cannot be said about philosophical positions in metaphysics, ontology, metaethics, etc. There is no experiment which can confirm or refute the statements of Quine, or Kripke, or Dummett, or Chalmers, or Dennett. IMHO, it is useless to judge based on who has exhibited the best arguments. Instead, I believe, you need to try to understand their systems, to see the world through their eyes, and then to judge whether doing so makes things seem clearer.
Most philosophy is ‘a matter of opinion’ simply because there are no experiments to appeal to, and no proofs to analyze. But it is not completely meaningless, either: even though you cannot ‘pay the rent in expected experience’, you can sometimes ‘pay the rent’ in insight gained.
I guess, then, the reason I have difficulty understanding your position is that I don’t see a sharp distinction between science and philosophy, for standard Quinean reasons. The kind of philosophy that interests me is very much dependent on experiment. For example, my own meta-ethical views consist of a list of factual propositions that are amenable to experiment. (I’ve started listing them here.)
But of course, a great deal of philosophy is purely analytic, like mathematics. Surely the theorems of various logics are not mere opinion?
As for those (admittedly numerous) synthetic claims for which decent evidence is unavailable, I’m not much interested in them, either. Perhaps this is the subset of philosophical claims you consider to be “opinion”? Even then, I think the word “opinion” is misleading. This class of claims contains many that are either confused and incoherent, or else coherent and factual but probably unknowable.