Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism?
Reply to charge that it is clearly false: Sorry, it doesn’t look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.
Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don’t link to a book on Amazon where I click “Surprise me” ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.
The CEV article’s “conflation” is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I’d seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) the original CEV paper wasn’t supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.
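The terminal/instrumental distinction being argued over can be made concrete with a minimal sketch, assuming a toy world model (the functions, states, and payoffs here are hypothetical illustrations, not anything from the CEV paper): a terminal value is scored directly by the utility function, while an instrumental value is derived from expected consequences under the world model, and so shifts when the model shifts even though the terminal values stay fixed.

```python
# Toy illustration of terminal vs. instrumental value.
# Terminal values are scored directly by the utility function; instrumental
# values fall out of expected consequences under the agent's world model.

def terminal_utility(state):
    """The agent's terminal value: it cares only about gold."""
    return state.get("gold", 0)

def world_model(state, action):
    """Hypothetical dynamics: buying a key costs 1 gold but probably
    opens a vault containing 10 gold. Returns (outcome, probability) pairs."""
    if action == "buy_key":
        opened = dict(state, gold=state.get("gold", 0) - 1 + 10)
        failed = dict(state, gold=state.get("gold", 0) - 1)
        return [(opened, 0.8), (failed, 0.2)]
    return [(state, 1.0)]

def instrumental_value(action, model, state):
    """Value of an action derived purely from its expected consequences."""
    return sum(p * terminal_utility(outcome)
               for outcome, p in model(state, action))

state = {"gold": 5}
# The key has no terminal value, yet high instrumental value under this model:
print(instrumental_value("buy_key", world_model, state))
print(instrumental_value("do_nothing", world_model, state))
```

The point of the sketch: swapping in a different `world_model` changes the key's instrumental value without touching `terminal_utility`, which is exactly the separation the distinction is meant to capture.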
Thanks for your reply.

On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.
Your response on misrepresenting Quinean naturalism did not reply to this part: “Quinean naturalists don’t just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it.”
As for an example of dissolving certain questions into cognitive algorithms, I’m drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I’m writing.)
On CEV and extrapolation: You seem to agree that the distinction is useful, because you’ve used it yourself elsewhere (you just weren’t going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that’s not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.
I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.