Thanks so much. I didn’t know about Quine, and from what you’ve quoted it seems quite clearly in the same vein as LessWrong.
Also, out of curiosity, do you know if anything’s been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes—does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?
What you care about determines what your explorations learn about. An AI that didn’t care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn’t learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.
That was my intuition. Just wanted to know if there’s more out there.
What, you mean in mainstream philosophy? I don’t think mainstream philosophers think that way, even Quineans. The best ones would say gravely, “Yes, goals are important” and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.
I actually don’t think this is right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
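For concreteness, here is a minimal sketch of the calculation that line of work points at (the actions, utilities, prior, and likelihoods below are made-up illustrative numbers, not anything from Good’s article): an experiment’s value is how much its result is expected to improve your best decision.

```python
# Sketch: the decision-theoretic value of running an experiment.
# All numbers are hypothetical; "act"/"pass" are placeholder actions.

def best_expected_utility(p_h, utility):
    """Expected utility of the best available action, given P(hypothesis) = p_h."""
    return max(p_h * utility[a][True] + (1 - p_h) * utility[a][False]
               for a in utility)

utility = {
    "act":  {True: 10.0, False: -5.0},   # payoff if hypothesis is true / false
    "pass": {True:  0.0, False:  0.0},
}

prior = 0.3                              # P(hypothesis) before experimenting
likelihood = {True: 0.9, False: 0.2}     # P(positive result | hypothesis state)

# Probability of each result, and the posterior it would leave us with.
p_pos = prior * likelihood[True] + (1 - prior) * likelihood[False]
post_pos = prior * likelihood[True] / p_pos
post_neg = prior * (1 - likelihood[True]) / (1 - p_pos)

# Value of information: expected utility of deciding after seeing the result,
# minus expected utility of deciding on the prior alone.
voi = (p_pos * best_expected_utility(post_pos, utility)
       + (1 - p_pos) * best_expected_utility(post_neg, utility)
       - best_expected_utility(prior, utility))
print(f"expected value of the experiment: {voi:.2f}")  # positive here
```

In this framing, the most valuable experiment is simply the one that maximizes this quantity net of its cost; Good also showed the quantity is never negative for a cost-free observation.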
Yes, that’s about right.
AI research is where to look with regard to your question, SarahC. Start with chapter 2 and the chapters with ‘decisions’ in the title in AI: A Modern Approach.
Thank you!
My first exposure was his mathematical logic book. At the time, I didn’t even realize he had a reputation as a philosopher per se. (I knew from the back cover of the book that he was in the philosophy department at Harvard, but I just assumed that that was where anyone who got sufficiently “foundational” about their mathematics got put.)
Ah, see, when I learned a little logic, I shuddered, muttered “That is not dead which can unsleeping lie,” and moved on. I’ll come back to it if it ever seems useful though.
Yah, I sometimes joke that logicians are viewed by mathematicians in the same way that mathematicians are viewed by normal people. Logic makes complete sense to me, but some of my professional mathematician friends cannot understand my tastes at all. I, on the other hand, cannot understand how one can get interested in homological algebra or other such things, when there are all these really pressing logical issues to solve :-)
That is exactly why I enjoy learning about logic.
Will Sawin, aspiring necromancer… That should be on your business card.
I should have a business card.
Could you clarify what you mean? When I parse your second paragraph, it comes across to my mind as three or four separate questions...
Ok, this is actually an area on which I’m not well-informed, which is why I’m asking you instead of “looking it up”—I’d like to better understand exactly what I want to look up.
Let’s say we want to build a machine that can form accurate predictions and models/categories from observational data of the sort we encounter in the real world—somewhat noisy, and mostly “uninteresting” in the sense that you have to compress or ignore some of the data in order to make sense of it. Let’s say the approach is very general—we’re not trying to solve a specific problem and hard-coding in a lot of details about that problem, we’re trying to make something more like an infant.
Would learning happen more effectively if the machine had some kind of positive/negative reinforcement? For example, if the goal is “find the red ball and fetch it” (which requires learning how to recognize objects and also how to associate movements in space with certain kinds of variation in the 2D visual field), would it help if there were something called “pain” which assigned a cost to bumping into walls, or something called “pleasure” which assigned a benefit to successfully fetching the ball?
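As a toy sketch of what that could look like (a made-up 4×4 grid world with made-up reward numbers, not a claim about how any real system or infant works), here is tabular Q-learning where “pain” is just a negative reward for hitting a wall and “pleasure” a positive reward for reaching the ball:

```python
import random

SIZE, BALL = 4, (3, 3)                    # hypothetical 4x4 world; ball at (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    nx, ny = state[0] + action[0], state[1] + action[1]
    if not (0 <= nx < SIZE and 0 <= ny < SIZE):
        return state, -1.0, False         # bumped a wall: "pain"
    if (nx, ny) == BALL:
        return (nx, ny), 10.0, True       # fetched the ball: "pleasure"
    return (nx, ny), 0.0, False

q = {}                                    # (state, action) -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(500):                      # 500 practice episodes
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:     # explore occasionally...
            action = random.choice(ACTIONS)
        else:                             # ...otherwise act on learned values
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt

# Afterwards, greedy play walks to the ball without bumping walls: everything
# the agent "knows" about the room is encoded in the values q assigns.
```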
Is the fact that animals want food and positive social attention necessary to their ability to learn efficiently about the world? We’re evolved to narrow our attention to what’s most important for survival—we notice motion more than we notice still figures, we’re better at recognizing faces than arbitrary objects. Is it possible that any process needs to have “desires” or “priorities” of this sort in order to narrow its attention enough to learn efficiently?
To some extent, most learning algorithms have cost functions associated with failure or error, even the one-line formulas. It would be a bit silly to say the Mumford-Shah functional feels pleasure and pain. So I guess there’s also the issue of clarifying exactly what desires/values are.
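For instance, in least-squares fitting the entire “value system” is a one-line cost formula, and learning is nothing but driving it down (a generic textbook example, not specific to anything above):

```python
# Fit y ~ w * x by gradient descent on the one-line cost: sum((w*x - y)^2).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]                 # noisy data with true slope near 2

w, lr = 0.0, 0.01
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad                        # error is "punished"; nothing is felt
print(round(w, 2))                        # close to the underlying slope of 2
```

Whether that cost function counts as a “desire” seems to be exactly the definitional question.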
Practical importance for what purpose? Whatever that purpose is, adding heuristics that optimize the learning heuristics toward better fulfillment of that purpose would be fruitful for it.
It would be of practical importance to the extent that the original implementation of the learning heuristics is suboptimal, and to the extent that implementable learning-heuristic-improving heuristics can act on that. If you are talking about autonomous agents, self-improvement is a necessity, because you need open-ended potential for further improvement. If you are talking about non-autonomous tools that people write, it’s often difficult to construct useful heuristic-improvement heuristics. But of course their partially optimized structure was already chosen in light of the values they’re optimized for; the purpose lives in the designers.
What do you mean by a goal? Or learning?