Okay, so what have they done that I would consider cognitive philosophy? It doesn’t matter how many verbal-type non-dissolved questions we agree on apart from that. I’m taking free will as an exemplar and saying, “But it’s all like that, so far as I’ve been able to tell.”
It doesn’t matter how many verbal-type non-dissolved questions we agree on apart from that. I’m taking free will as an exemplar and saying, “But it’s all like that, so far as I’ve been able to tell.”
I’m not sure what you mean by this. Are you saying that my claim that LW-style philosophy shares many central assumptions with Quinean naturalism in contrast to most of philosophy doesn’t hinge on whether or not I can present a long list of things on which LW-style philosophy and Quinean naturalism agree, in contrast to most of philosophy?
I suspect that’s not what you’re saying, but then… what do you think it was that I was claiming in the first place?
Or, another way to put it: Which sentence of my original article are you disagreeing with? Do you disagree with my claim that “standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century”? Or perhaps you disagree with my claim that “Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science—a movement that has been active for at least two decades”? Or perhaps you disagree with my claim that “Rationalists need not dismiss or avoid philosophy”?
I wonder if you agree with gjm’s suggestion that “LW-style philosophy takes (what turns out to be) Quinean naturalism as a starting point and then goes on to do things that no one working in mainstream philosophy has thought of.” That’s roughly what I said above, though of course I’ll point out that lots of Quinean naturalists have taken Quinean naturalism as a starting point and done things that nobody else thought of. That’s just what it means to make original contributions in the movement.
I’ll be happy to provide examples of “cognitive philosophy” once I’ve got this above bit cleared up. I’ve given examples before (Schroeder 2004; Bishop & Trout 2004; Bickle 2003), but of course I could give more detail.
Are you saying that my claim that LW-style philosophy shares many central assumptions with Quinean naturalism in contrast to most of philosophy doesn’t hinge on whether or not I can present a long list of things on which LW-style philosophy and Quinean naturalism agree, in contrast to most of philosophy?
I’m saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant; thus, presenting the long list of basic assumptions on which LW-style philosophy and Quinean naturalism agree is, from my perspective, irrelevant.
Do you disagree with my claim that “standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century”?
Yes. What I would consider “standard LW positions” is not “there is no libertarian free will” but rather “the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z”. If the latter has been a standard position then I would be quite interested.
Or perhaps you disagree with my claim that “Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science—a movement that has been active for at least two decades”?
The kinds of reforms you quote are extremely basic, along the lines of “OMG there are cognitive biases and they affect philosophers!”, not “This is how this specific algorithm generates the following philosophical debate...” If the movement hasn’t progressed to the second stage, then there seems little point in aspiring LW rationalists reading about it.
GJM’s suggestion is correct, but the thing which you seem to deny, and which I think is true, is that LW is at a different stage of doing this sort of philosophy than any Quinean naturalism I have heard of, so that the other Quineans “doing things that nobody else has thought of” don’t seem to be doing commensurate work.
I am not asking for an example of someone who agrees with me that, sure, cognitive philosophy sounds like a great idea, by golly. There’s a difference between saying “Sure, evolution is true!” and doing evolutionary biology.
I’m asking for someone who’s dissolved a philosophical question into a cognitive algorithm, preferably in a way not previously seen before on LW.
Did you read the LW sequence on free will, both the setup and the solution? Apologies if you’ve already previously answered this question, I have a vague feeling that I asked you before and you said yes, but still, just checking.
On the whole, you seem to think that I should be really enthusiastic about finding philosophers who agree with my basic assumptions, because here are these possible valuable allies in academia—why, if we could reframe LW as Quineanism, we’d have a whole support base ready-made!
Whereas I’m thinking, “If you ask what sort of activity these people perform in their daily work, their skills are similar to those of other philosophers and unlike those of people trying to figure out what algorithm a brain is running” and so they can’t be hired to do the sort of work we need without extensive retraining; and since we’re not out to reform academic philosophy, per se, it’s not clear that we need allies in a fight we could just bypass.
I’m saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant...
Well, it’s important to my claim that LW-style philosophy fits into the category of Quinean naturalism, which I think is undeniable. You may think Quinean naturalism is obvious, but well… that’s what makes you a Quinean naturalist. Part of the purpose of my post is to place LW-style philosophy in the context of mainstream philosophy, and my list of shared assumptions between LW-style philosophy and Quinean philosophy does just that. That goal by itself wasn’t meant to be very important. But I think it’s a categorization that cuts reality near enough the joints to be useful.
What I would consider “standard LW positions” is not “there is no libertarian free will” but rather “the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z”. If the latter has been a standard position then I would be quite interested.
Then we are using the word “standard” in different ways. If I were to ask most people to list some “standard LW positions”, I’m pretty sure they would list things like reductionism, empiricism, the rejection of libertarian free will, atheism, the centrality of cognitive science to epistemology, and so on—long before they list anything like “the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z”. I’m not even sure how much consensus that enjoys on Less Wrong. I doubt it is as much a ‘standard’ position on Less Wrong as the other things I mentioned.
But I’m not here to argue about the meaning of the word standard.
Disagreement: dissolved.
Moving on: Yes, I read the free will stuff. ‘How an Algorithm Feels from the Inside’ is one of my all-time favorite Yudkowsky posts.
I’ll have to hear more about what you think LW is doing that Quinean naturalists are not doing. But really, I don’t even need to wait for that to respond. Even work by philosophers who are not Quinean naturalists can be useful in your very particular line of work—for example in clearing up your CEV article’s conflation of “extrapolating” from means to ends and “extrapolating” from current ends to new ends after reflective equilibrium and other processes have taken place.
Finally, you say that if Quinean naturalism hasn’t progressed from recognizing that biases affect philosophers to showing how a specific algorithm generates a philosophical debate then “there seems little point in aspiring LW rationalists reading about it.”
This claim, I think, is clearly false as stated, and it misrepresents the state of Quinean naturalism.
First, on falsity: There are many other useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates. Since your own philosophical work on Less Wrong has considered far more than just this, I assume you agree. Thus, it is not the case that Quinean naturalists aren’t doing useful work unless they are discovering the cognitive algorithms that generate philosophical debates.
Second, on misrepresentation: Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it. Moreover, Quinean naturalists do sometimes discuss how cognitive algorithms generate philosophical debates. See, for example, Eric Schwitzgebel’s recent work on how introspection works and why it generates philosophical confusions.
It seems you’re not just resisting the classification of LW-style philosophy within the broader category of Quinean naturalism. You’re also resisting the whole idea of seeing value in what mainstream naturalistic philosophers are doing, which I don’t get. How do you think that thought got generated? Reading too much modal logic and not enough Dennett / Bickle / Bishop / Metzinger / Lokhorst / Thagard?
I’m not even trying to say that Eliezer Yudkowsky should read more naturalistic philosophy. I suspect that’s not the best use of your time, especially given your strong aversion to it. But I am saying that the mainstream community has useful insights and clarifications and progress to contribute. You’ve already drawn heavily from the basic insights of Quinean naturalism, whether or not you got them from Quine himself. And you’ve drawn from some of the more advanced insights of people like Judea Pearl and Nick Bostrom.
So I guess I just don’t get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don’t understand where that aversion is coming from. If you’re not that familiar with Quinean naturalistic philosophy, why do you assume in advance that it’s a bad idea to read through it for insights?
I’m quite sure they do. Right now I can’t think of a philosopher who is as imposing to me as (the late) E.T. Jaynes is. Unless you count people like Judea Pearl who also do AI research, that is. :)
But that doesn’t mean that mainstream philosophers never make useful and original contributions on all kinds of subjects relevant to Less Wrong and even to friendly AI.
Right now I can’t think of a philosopher who is as imposing to me as (the late) E.T. Jaynes is. Unless you count people like Judea Pearl who also do AI research, that is.
That (Jaynes) is a pretty high standard. But not impossibly high. As candidates, I would mention Jaakko Hintikka, Per Martin-Löf, and the late David Lewis. If you are allowed to count economists, then I would also mention game theorists like Aumann, Binmore, and the late John Harsanyi. And if you allow philosophically inclined physicists like Jaynes, there are quite a few folks worth mentioning.
If so, I don’t think he can maintain that position consistently, since he has already benefited from the work of many mainstream philosophers, and continues to do so—for example Bostrom on anthropic reasoning.
So I guess I just don’t get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don’t understand where that aversion is coming from.
Actually, it’s an expectation that studying this philosophy stuff would be of no use (or could even harm you), which is a more reflectively reliable judgment than mere emotional aversion. It might be incorrect, but it can’t be influenced by arguing that aversion is irrelevant (not that you do argue this way, but summarizing the position with the word “aversion” suggests doing that).
Yes, that’s what most Quinean naturalists are doing...
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism? I hope so. I’m also still curious to hear your response to the specific example I’ve now given several times of how even non-naturalistic philosophy can provide useful insights that bear directly on your work on Friendly AI (the “extrapolation” bit).
As for expecting naturalistic philosophy to teach very bad habits of thought: That has some plausibility. But it is hard to argue about with any precision. What’s the cost/benefit analysis on reading naturalistic philosophy after having undergone significant LW-rationality training? I don’t know.
But I will point out that reading naturalistic philosophy (1) deconverted me from fundamentalist Christianity, (2) led me to reject most of standard analytic philosophy, (3) led me to almost all of the “standard” (in the sense I intended above) LW positions, and (4) got me reading and loving Epistemology and the Psychology of Human Judgment and Good and Real (two philosophy books that could just as well be a series of Less Wrong blog posts) - all before I started regularly reading Less Wrong.
So… it’s not always bad. :)
Also, your recommendation not to read naturalistic, reductionistic philosophy outside of Less Wrong feels very paternalistic and cultish to me, and I have a negative emotional (and perhaps rational) reaction to the suggestion that people should only get their philosophy from a single community.
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism?
Reply to charge that it is clearly false: Sorry, it doesn’t look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.
Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don’t link to a book on Amazon where I click “Surprise me” ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.
The CEV article’s “conflation” is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I’d seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) the original CEV paper wasn’t supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.
On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.
Your response on misrepresenting Quinean naturalism did not reply to this part: “Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it.”
As for an example of dissolving certain questions into cognitive algorithms, I’m drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I’m writing.)
On CEV and extrapolation: You seem to agree that the distinction is useful, because you’ve used it yourself elsewhere (you just weren’t going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that’s not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.
I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.
I agree about philosophy, and actually I feel similarly about LW-style rationality, for my value of real work (engineering mostly, with some art and science). Your tricks burden the tree search, and also easily lead to the wrong order of branch processing, as the ‘biases’ that make branch processing effective are either disabled or, worst of all, negated before a substitute is devised.
If you want to form a belief about, for example, FAI, it’s all well and good that you don’t feel that morality can result from some simple principles. If you want to build FAI, this branch (the generated morality that we agree with) sits much, much lower in the search order, while its probability of success, really, isn’t that much worse, since the long, hand-wavy argument has many points of possible failure and low reliability. And then there’s still no immunity against fallacies. The worst form of the sunk cost fallacy is disregarding the possibility of a better solution after the cost has been sunk. That’s what destroys corporations after they sink costs: they don’t even pursue a cost-recovery option when it doesn’t coincide with prior effort and only utilizes part of prior effort.
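One way to read the branch-ordering point above is as a claim about search heuristics. Here is a minimal sketch (the search problem, the heuristic, and all the numbers are invented for illustration) of how a ‘bias’ that orders branches well, disabled, or negated changes the cost of reaching a goal:

```python
import heapq
import itertools

TARGET = "10110"  # the one "solution" leaf in a binary tree of depth 5

def children(s):
    # each node extends the bitstring by one character, up to the target length
    return [] if len(s) >= len(TARGET) else [s + "0", s + "1"]

def match(s):
    # the "bias": how many positions of s already agree with TARGET
    return sum(a == b for a, b in zip(s, TARGET))

def expansions(score):
    # best-first search; returns how many nodes are expanded before TARGET
    order = itertools.count()  # FIFO tie-breaking for equal scores
    frontier = [(score(""), next(order), "")]
    expanded = 0
    while frontier:
        _, _, node = heapq.heappop(frontier)
        expanded += 1
        if node == TARGET:
            return expanded
        for c in children(node):
            heapq.heappush(frontier, (score(c), next(order), c))
    return expanded

with_bias = expansions(lambda s: -match(s))  # good ordering: best matches first
no_bias = expansions(lambda s: 0)            # bias disabled: plain breadth-first order
negated = expansions(lambda s: match(s))     # bias negated: worst matches first

print(with_bias, no_bias, negated)  # 6, 54, 63 of the tree's 63 nodes
```

The same search, with its ordering heuristic disabled or reversed, expands nearly the whole tree before finding the goal, which is roughly the cost the comment is pointing at.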
There are many other [Edit: was originally ‘more’] useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates.
Perhaps. But it is difficult to imagine any less complete problem dissolution being successful at actually shutting down that confused philosophical debate, and thus freeing those first-class minds to actually do those hypothetical useful things.
BTW, by “more” I meant “additional”: I meant that there “are many other useful things for philosophers… to be doing...” I’ve now clarified the wording in the original comment.
It might be useful, if only for gaining status and attention and funding, to connect your work directly to one or several academic fields. To present it as a synthesis of philosophy, computer science, and cognitive science (or some other combination of your choice). When people ask me what LessWrong is, I generally say something like “It’s philosophy from a computer scientist’s perspective.” Most people can only put a mental label on something when they have a rough idea of what it’s like, and it’s not practical to say, “Well, our work isn’t like anything.”
That doesn’t mean you have to hire philosophers or join a philosophy department; it might not mean that you, personally, have to do anything. But I do think that more people would be interested, and have a smaller inferential distance, if LW ideas were generally presented as related to other disciplines.
Expanding on this: which section of my local Barnes & Noble is your (Eliezer’s) book going to be in? Philosophy seems like the best fit (aside from best-selling non-fiction) to get new interested readership.
Amazon’s “Books > Nonfiction > Social Sciences” contains things like Malcolm Gladwell and Predictably Irrational, which I think is the audience that Eliezer is targeting.
Just taking the example I happen to know about, Sarah-Jane Leslie works on the meaning of generics. (What do we mean when we say “Tigers have stripes” ? All tigers? Most tigers? Normal tigers? But then how do we account for true statements like “Tigers eat people” when most tigers don’t eat people, or “Peacocks have colorful tails” when female peacocks don’t have colorful tails?) She answers this question directly using evidence from cognitive science. I think it counts as question-dissolving.
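To make the question-dissolving flavor concrete, here is a deliberately crude sketch. The function below is my own caricature for illustration, not Leslie’s actual theory: if accepting a generic “Ks are F” runs through several distinct cognitive routes rather than one quantifier, then “all?”, “most?”, and “normal?” were the wrong question to begin with.

```python
# Crude caricature for illustration only (not Leslie's actual model):
# a generic "Ks are F" can be accepted via several distinct routes,
# so no single quantifier reading captures all the cases.
def generic_acceptable(prevalence, striking=False, characteristic=False):
    """prevalence: fraction of Ks that are F;
    striking: F is dangerous or otherwise noteworthy;
    characteristic: F belongs to the kind's nature (e.g. true of the
    relevant subkind, like colorful tails on male peacocks)."""
    return (
        prevalence > 0.5                    # "Tigers have stripes"
        or (striking and prevalence > 0.0)  # "Tigers eat people"
        or characteristic                   # "Peacocks have colorful tails"
    )
```

On this caricature, “Tigers eat people” comes out acceptable at tiny prevalence because the property is striking, while a bland property at the same prevalence does not, which is roughly the asymmetry that no single quantifier reading captures.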
I’m reminded of the “subsequence” of The Level Above Mine, Competent Elites, Above Average AI Scientists, and That Magical Click.
Maybe mainstream philosophers just lack the aura of thousand-year-old rationalist vampires?
I’d never heard of Per Martin-Löf, thanks.
I of course am not definitive here, but I strongly suspect that from EY’s perspective it means precisely that.
Maybe. But they have a self-deprecating sense of humor. Doesn’t that count for something?
Thanks for the link to Eric Schwitzgebel; very interesting reading!
Because I expect it to teach very bad habits of thought that will lead people to be unable to do real work. Assume naturalism! Move on! NEXT!
Yes, that’s what most Quinean naturalists are doing...
Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism? I hope so. I’m also still curious to hear your response to the specific example I’ve now given several times of how even non-naturalistic philosophy can provide useful insights that bear directly on your work on Friendly AI (the “extrapolation” bit).
As for expecting naturalistic philosophy to teach very bad habits of thought: That has some plausibility. But it is hard to argue about with any precision. What’s the cost/benefit analysis on reading naturalistic philosophy after having undergone significant LW-rationality training? I don’t know.
But I will point out that reading naturalistic philosophy (1) deconverted me from fundamentalist Christianity, (2) led me to reject most of standard analytic philosophy, (3) led me to almost all of the “standard” (in the sense I intended above) LW positions, and (4) got me reading and loving Epistemology and the Psychology of Human Judgment and Good and Real (two philosophy books that could just as well be a series of Less Wrong blog posts) - all before I started regularly reading Less Wrong.
So… it’s not always bad. :)
Also, your recommendation not to read naturalistic, reductionistic philosophy outside of Less Wrong feels rather paternalistic and cultish to me, and I have a negative emotional (and perhaps rational) reaction to the suggestion that people should get their philosophy from only a single community.
Reply to charge that it is clearly false: Sorry, it doesn’t look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.
Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don’t link to a book on Amazon where I click “Surprise me” ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.
The CEV article’s “conflation” is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I’d seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) because the original CEV paper wasn’t supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.
Thanks for your reply.
On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.
Your response on misrepresenting Quinean naturalism did not reply to this part: “Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it.”
As for an example of dissolving certain questions into cognitive algorithms, I’m drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I’m writing.)
On CEV and extrapolation: You seem to agree that the distinction is useful, because you’ve used it yourself elsewhere (you just weren’t going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that’s not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.
I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.
I agree about philosophy, and actually I feel similarly about LW-style rationality, for my value of real work (mostly engineering, with some art and science). Your tricks burden the tree search, and they also easily lead to processing branches in the wrong order, since the ‘biases’ that made branch processing effective are disabled or, worst of all, negated before a substitute is devised.
If you want to form a belief about, for example, FAI, it’s all very well that you don’t feel morality can result from some simple principles. But if you want to build FAI, this branch (a generated morality that we agree with) costs much less to explore, while its probability of success really isn’t that much worse, since the long, hand-wavy argument has many points of possible failure and low reliability. And there’s still no immunity against fallacies. The worst form of the sunk cost fallacy is disregarding the possibility of a better solution after the cost has been sunk. That’s what destroys corporations after they sink costs: they won’t even pursue a cost-recovery option when it doesn’t coincide with their prior effort and only utilizes part of it.
Perhaps. But it is difficult to imagine any less complete problem dissolution being successful at actually shutting down that confused philosophical debate, and thus freeing those first-class minds to actually do those hypothetical useful things.
BTW, by “more” I meant “additional”: I meant that there “are many other useful things for philosophers… to be doing...” I’ve now clarified the wording in the original comment.
It might be useful, if only for gaining status and attention and funding, to connect your work directly to one or several academic fields. To present it as a synthesis of philosophy, computer science, and cognitive science (or some other combination of your choice). When people ask me what LessWrong is, I generally say something like “It’s philosophy from a computer scientist’s perspective.” Most people can only put a mental label on something when they have a rough idea of what it’s like, and it’s not practical to say, “Well, our work isn’t like anything.”
That doesn’t mean you have to hire philosophers or join a philosophy department; it might not mean that you, personally, have to do anything. But I do think that more people would be interested, and have a smaller inferential distance, if LW ideas were generally presented as related to other disciplines.
Expanding on this: which section of my local Barnes &amp; Noble will your (Eliezer’s) book be in? Philosophy seems like the best fit (aside from best-selling non-fiction) for attracting new interested readership.
Amazon’s “Books > Nonfiction > Social Sciences” contains things like Malcolm Gladwell and Predictably Irrational, which I think is the audience that Eliezer is targeting.
Just taking the example I happen to know about, Sarah-Jane Leslie works on the meaning of generics. (What do we mean when we say “Tigers have stripes” ? All tigers? Most tigers? Normal tigers? But then how do we account for true statements like “Tigers eat people” when most tigers don’t eat people, or “Peacocks have colorful tails” when female peacocks don’t have colorful tails?) She answers this question directly using evidence from cognitive science. I think it counts as question-dissolving.