I find it interesting, even telling, that nobody has yet challenged the assumptions behind the proposition “Rationality is a tool for accuracy,” which would be that “rationality is the best tool for accuracy” and/or that “rationality is the sole tool that can be used to achieve accuracy.”
Why would someone challenge a proposition that they agree with? While I don’t see that the proposition “Rationality is a tool for accuracy” presumes “Rationality is the tool for accuracy”, I’d agree with the latter anyway. Rationality is the only effective tool there is, and more than merely by definition. Praying to the gods for revelation doesn’t work. Making stuff up doesn’t work. Meditating in a cave won’t tell you what the stars are made of. Such things as observing the world, updating beliefs from experience, making sure that whatever you believe implies something about what you will observe, and so on: these are some of the things in the rationality toolbox, these are the things that work.
If you disagree with this, please go ahead and challenge it yourself.
Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray.
If it leads to winning more than other possibilities, it’s rational to do it. If your utility function values pretending to be stupid so you’ll be well-liked by idiots, that is winning.
Key phrase. The accurate map leads to more winning. Acknowledging that X obviously doesn’t work, but pretending that it does in order to win is very different from thinking X works.
ETA: It is all fine and dandy that I am getting upvotes for this, and by all means don’t stop, but really I am just a novice applying Rationality 101 wherever I see fit in order to earn my black belt.
What evidence is there that the map is static? We make maps and the world transforms. Rivers become canyons; mountains become molehills (pardon the rhetorical ring I could not resist). Given that all maps are approximations, isn’t it rational to moderate one’s navigation with the occasional off-course exploration to verify that no drastic changes have occurred in the geography?
And because I feel the analogy is pretty far removed at this point, what I mean is this: if we have charted a goal-orientation based on our map that puts us on a specific trajectory, would it not be beneficial to occasionally abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths?
The evidence that the territory is static is called Physics. The laws do not change, and the elegant counterargument against anti-inductionism is that if induction didn’t work our brains would stop working, because our brains depend on static laws.
There is no evidence whatsoever that the map is static. It should never be; you should always be prepared to update. There isn’t a universal prior that lets you reason inductively about any universe.

The territory is not static. Have you ever heard of quantum physics?

Quantum physics is invariant under temporal translation too.
The laws don’t change by definition. If something changes, we try to figure out some invariant description of how it changes, and call that a law. We presume a law even when we don’t know the invariant description (as is the case with QM and gravity combined). If there were magic in the real world, we’d do the same thing and have the same sort of invariant laws of magic, even though the number of symmetries might be lower.
The territory is governed by unchanging, perfectly global, mathematically simple universal laws.
The Schrödinger equation does not change. Ever.
Furthermore, you can plot the time dimension as a spatial dimension and then navigate a model of an unchanging structure of world lines. That is an accepted model in General Relativity, called the Block Universe. The Block Universe is ‘static’, that is, without time.
There is reason to believe the same can be done in quantum mechanics.
would it not be beneficial to occasionally abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths?
Why would that not be part of the trajectory traced out by your goal-orientation, or a natural interaction between the fuzziness of your map and your goals?
Well, you would try to have that as part of your trajectory, but what I am suggesting is that there will always be things beyond your planning and your reasoning; in light of this, perhaps we should strategically deviate from those plans every now and then to double-check what else is out there.
I’m still confused by what you’re considering inside my reasoning and outside my planning / reasoning. If I say “spend 90% of your time in the area with the highest known EV and 10% of your time measuring areas which have at least a 1% chance of having higher reward than the current highest EV, if they exist,” then isn’t my ignorance about the world part of my plan / reasoning, such that I don’t need to deviate from those plans to double check?
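The 90%/10% allocation described above is essentially an explore/exploit policy. A minimal sketch of it in code, where the area names, probabilities, and the 1% threshold are all hypothetical numbers taken from the comment rather than anything principled:

```python
import random

def choose_area(areas, best, explore_frac=0.10, threshold=0.01):
    """Mostly exploit the best-known area; occasionally measure a
    candidate with at least `threshold` chance of beating it."""
    candidates = [name for name, info in areas.items()
                  if name != best and info["p_better"] >= threshold]
    if candidates and random.random() < explore_frac:
        return random.choice(candidates)  # explore: measure an alternative
    return best                           # exploit: highest known EV

# Hypothetical areas, each with an estimated chance of beating the best.
areas = {
    "A": {"p_better": 0.0},    # current highest known EV
    "B": {"p_better": 0.05},   # worth an occasional look
    "C": {"p_better": 0.001},  # below threshold, never explored
}
picks = [choose_area(areas, "A") for _ in range(10_000)]
print(picks.count("A") / len(picks))  # roughly 0.9
```

Note that ignorance about the world is built into the policy itself (the `p_better` estimates and the exploration fraction), so no deviation from the plan is needed to do the double-checking.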
It is all fine and dandy that I am getting upvotes for this, and by all means don’t stop, but really I am just a novice applying Rationality 101 wherever I see fit in order to earn my black belt.
Personally, I think that behavior should be rewarded.
Thank you, and I share that view. Why don’t we see everyone doing it? Why, I would be overjoyed if everyone was so firmly trained in Rat101 that comments like these were not special.
But now I am deviating into a should-world + diff.

I’m pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not.

It seems to me that most reasonably novel insights are worth at least an upvote or two at the current value.

EDIT: Just in case this comes off as disparaging LW’s upvote generosity or average comment quality, it’s not.
Though among LW members, people probably don’t need to be encouraged to use basic rationality. If we could just upvote and downvote people’s arguments in real life...
I’m also considering the possibility that MHD was asking why we don’t see everyone using Rationality 101.
Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray.
I’m talking about the real world, not an imaginary one. You can make up imaginary worlds to come up with a counterexample to any generalisation you hear, but it amounts to saying “Suppose that were false? Then it would be false!”
Richard,

Would you agree that the speed at which you try to do something is inversely correlated with the accuracy you can produce?
I imagine the faster you try to do something, the poorer your results will be. Do you disagree?
If it is true that at times accuracy demands some degree of suspension/inaction, then I would suggest to you that tools such as praying, meditating, and “making stuff up” serve to slow the individual down, allowing for better accuracy in the long term, whereas increasing intentionality beyond some threshold will decrease overall results.

Does that make sense?
Slowing down will only give better results if it’s the right sort of slowing down. For example, slowing down to better attend to the job, or slowing down to avoid exhausting oneself. But I wasn’t talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don’t work.
It may be very useful to sit for a while every day doing nothing but contemplating one’s own mind, but the use of that lies in more clearly observing the thing that one studies in meditation, i.e. one’s own mind.
But I wasn’t talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don’t work.
I am suggesting the task they perform has two levels. The first is a surface structure, defined by whatever religious or creative purpose the performer thinks they serve. In my opinion, the medium of this level is completely arbitrary. It does not matter what you pray to, or if you meditate or pray, or play baseball for that matter. The importance of such actions comes from their deep structure, which develops beneficial cognitive, emotional, or physical habits.
Prayer is in many cultures a means of cultivating patience and concentration. The idea, which has been verified by the field of psychology, is that patience, concentration, reverence, toleration, empathy, sympathy, anxiety, serenity, and many other cognitive dispositions are not the result of a personality type, but rather the result of intentional development.
Within the last several decades there has been a revolution within the field of psychology as to what action is. Previously, cognitive actions were not thought of as actions, and therefore not believed to be things that you develop. It was believed that some people were just born kinder, more stressed, more sympathetic, etc.; that there were cognitive types. We now know that this is not true. While it is true that everyone is probably born with a different degree of competency in these various cognitive actions (just as some people are probably born slightly better at running, jumping, or other more physical actions), more important than innate talent is the amount of work someone puts into a capacity. Someone born with a below-average disposition for running can work hard and become relatively fast. In the same way, while there are some biological grounds and limitations, for the majority of people the total level of capacity they are able to achieve in some action is determined by the amount of work they devote to improving that action.
If you work out your tolerance muscles, you will become able to exhibit greater degrees of tolerance. If you work out your concentration muscle, you will be able to concentrate to greater degrees. How do you work out tolerance or concentration muscles? By engaging in tasks that require concentration or tolerance. So, does praying five times a day to some God have an impact on reality? If you mean in the sense that a “God” listens to and acts on your prayers, no. But if you mean in the sense of the commitment to keeping a schedule and concentrating on one thing five times a day, then yes, it does. It impacts the reality of your cognition and consciousness.
So, returning to what I was saying about suspending action: you interpreted it as “avoiding a task”, but I would suggest that suspending action here has a deeper meaning. It is not avoiding a task, but developing competencies in caution, accepting one’s locus of control, limitations, and acceptance.
There are more uses to meditation than just active reflection on thought. In fact, most meditation discourages thought. The purpose is to clear your mind, suggesting that there is a benefit in reducing intentionality to some degree.
Now, let me be clear that what I am advocating here is very much a value-based position. I am saying there is a benefit in exercising the acceptance of limitations to some degree, a benefit in caution to some degree, etc. I would be interested to know: do you disagree?
That is a lot of words, but it seems to me that all you are saying is that meditation (misspelled as “mediation” throughout) can serve certain useful purposes. So will a spade.
BTW, slowing a drum rhythm down for a beginner to hear how it goes is more difficult than playing it at full speed.

Along with religion, praying, and making stuff up. Meditating (thanks for the correction) was just an example.

Oh, I also don’t get the spade comment. I mean, I agree a spade has useful purposes, but what is the point of saying so here?

Not exactly sure what you are trying to express here. Do you mind further explanation?
Cox’s theorem does show that Bayesian probability theory (around here a.k.a. epistemic rationality) is the only way to assign numbers to beliefs which satisfies certain desiderata.
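For reference, an informal summary of what the theorem requires and what it yields (this is a sketch of the standard presentation, not a full statement of Cox’s desiderata):

```latex
% Desiderata (informal): degrees of belief are real numbers,
% they agree qualitatively with common sense, and equivalent
% states of knowledge receive equal plausibilities.
% Cox's theorem: any such system is equivalent, up to rescaling,
% to probability theory, i.e. it must obey
\begin{align}
  P(A \mid C) + P(\lnot A \mid C) &= 1
    && \text{(sum rule)} \\
  P(A \land B \mid C) &= P(A \mid C)\, P(B \mid A \land C)
    && \text{(product rule)}
\end{align}
```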
Aliciaparr,

This is in a sense the point of my essay! I define rationality as a tool for accuracy because I believed that was a commonly held position on this blog (perhaps I was wrong). But if you look at the overall point of my essay, it is to suggest that there are times when what is desired is achieved without rationality, therefore suggesting alternative tools for accuracy.
As to the idea of a “best tool”: as I outline in my opening, I do not think such a thing exists. A best tool implies a universal tool for some task. I think that there are many tools for accuracy, just as there are many tools for cooking. In my opinion it all depends on what ingredients you are faced with and what you want to make out of them.

Maybe think about it this way: what we mean by “rationality” isn’t a single tool, it’s a way of choosing tools.

That is just pushing it back one level of meta-analysis. The way of choosing tools is still a tool. It is a tool for choosing tools.
I agree, and the thing about taking your selection process meta is that you have to stop at some point. If you have more than one tool for choosing tools, how do you choose which one to pick for a given situation? You’d need a tool that chooses tools that choose tools! Sooner or later you have to have a single top-level tool or algorithm that actually kicks things into motion.
This is where we disagree. To have rationality be the only tool for choosing tools is to assume all meaningful action is derived from intentional transformation. I disagree with this idea, and I think modern psychology disagrees as well. It is not only possible, it is at times essential to have meaningful action that is not intentionally driven. If you accept this statement as fact, then it implies the need for a secondary system of tool choosing; more specifically, a type of emergency-brake system. Rationality is the choosing system, and the secondary system shuts it down when it is necessary to halt further production of intentionality.
[I]t is at times essential to have meaningful action that is not intentionally driven.
If by “not intentionally driven” you mean things like instincts and intuitions, I agree strongly. For one thing, the cerebral approach is way too slow for circumstances that require immediate reactions. There is also an aesthetic component to consider; I kind of enjoy being surprised and shocked from time to time.
Looking at a situation from the outside, how do you determine whether intentional or automatic action is best? From another angle, if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn’t, or vice versa, what (if anything) would you pick?
These evaluations themselves are part of yet another tool.
If by “not intentionally driven” you mean things like instincts and intuitions, I agree strongly.
Yes, exactly.
if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn’t, or vice versa, what (if anything) would you pick?
I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but as of now, the best I have come up with is moderating against extremes on either end. So if it seems like I have been overly intentional in recent days, weeks, etc., I try to rely more on instinct and intuition. It is rarely the case that I am relying too heavily on the latter ^_^
So if it seems like I have been overly intentional in recent days, weeks, etc, I try to rely more on instinct and intuition.
Right, this is a good idea! You might want to consider an approach that goes by deciding what situations best require intuition, and which ones require intentional thought, rather than aiming only to keep their balance even (though the latter does approximate the former to the degree that these situations pop up with equal frequency).
Overall, what I’ve been getting at is this: Value systems in general have this property that you have to look at a bunch of different possible outcomes and decide which ones are the best, which ones you want to aim for. For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one’s “utility function” or “terminal values”. This is true even though the human brain actually physically implements a person’s values as multiple modules operating at the same time rather than a single central dispatch.
In your article, you seemed to be saying that you specifically think that one shouldn’t have a single “final decision” function at the top of the meta stack. That’s not going to be an easily accepted argument around here, for the reasons I stated above.
Yeah, this is exactly what I am arguing.
For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one’s “utility function” or “terminal values”.
Could you explain the technical reasons more, or point me to some essays where I could read about this? I am still not convinced why it is more beneficial to have a single operating system.
I’m no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as “I want X and Y and Z and W and A1 through A22.” Talking about whether I have one goal or 26 goals therefore seems like a distraction.
In regards to why it’s possible, I’ll just echo what TheOtherDave said.
The reason it’s helpful to try for a single top-level utility function is because otherwise, whenever there’s a conflict among the many many things we value, we’d have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two?
Is quitting your job a good idea or not? Is going rock climbing instead of staying at home reading this weekend a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?
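A crude way to picture such a final arbiter in code. The value modules, their scores, and the weights are all made up purely for illustration; nothing here is a claim about how a brain actually implements this:

```python
# Hypothetical value modules, each scoring an option from its own viewpoint.
def excitement(option):
    return {"quit_job": 0.9, "stay": 0.2}[option]

def security(option):
    return {"quit_job": 0.1, "stay": 0.8}[option]

# The "final arbiter": a single top-level function that fixes how much
# each module counts, so conflicts are resolved consistently every time.
WEIGHTS = {excitement: 0.4, security: 0.6}

def utility(option):
    return sum(w * module(option) for module, w in WEIGHTS.items())

best = max(["quit_job", "stay"], key=utility)
print(best)  # with these weights, "stay" wins (0.56 vs 0.42)
```

The point is only that once the weights are fixed, any conflict between modules has one non-arbitrary answer; with no top-level function, the same conflict could be resolved differently each time it comes up.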
So I guess it comes down to: how important is it to you that your values are self-consistent?
More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
Without a final arbiter to weigh their suggestions and consider how important comfort and security are relative to each other, how do you do decide in a non-arbitrary way?
Well, there’s always the approach of letting all of me influence my actions and seeing what I do.
If you’re going to use the word rationality, use its definition as given here. Defining rationality as accuracy just leads to confusion and ultimately bad karma.
As for a universal tool for some task (i.e. updating your beliefs)? Well, you really should take a look at Bayes’ theorem before you claim that there is no such thing.
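The sense in which Bayes' theorem is a universal updating tool fits in a few lines. The prior and likelihoods below are invented numbers for illustration only:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical: evidence that's 90% likely if H is true, 20% likely
# if H is false, applied to a hypothesis given 30% prior credence.
posterior = bayes_update(0.30, 0.90, 0.20)
print(round(posterior, 3))  # 0.659
```

The same three-argument update applies to any hypothesis and any evidence, which is what "universal tool" means here.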
I am willing to look at your definition of rationality, but don’t you see how it is problematic to attempt to prescribe one static definition to a word?
As for a universal tool for some task (i.e. updating your beliefs)? Well, you really should take a look at Bayes’ theorem before you claim that there is no such thing.
OK, so you do believe that Bayes’ theorem is a universal tool?
I find it interesting, even telling, that nobody has yet challenged the assumptions behind the proposition “Rationality is a tool for accuracy,” which would be that “rationality is the best tool for accuracy” and/or that “rationality is the sole tool that can be used to achieve accuracy.”
Why would someone challenge a proposition that they agree with? While I don’t see that the proposition “Rationality is a tool for accuracy” presumes “Rationality is the tool for accuracy”, I’d agree with the latter anyway. Rationality is the only effective tool there is, and more than merely by definition. Praying to the gods for revelation doesn’t work. Making stuff up doesn’t work. Meditating in a cave won’t tell you what the stars are made of. Such things as observing the world, updating beliefs from experience, making sure that whatever you believe implies something about what you will observe, and so on: these are some of the things in the rationality toolbox, these are the things that work.
If you disagree with this, please go ahead and challenge it yourself.
Supposing that you lived in a universe where you could pray for and would then always receive infallible instruction, it would be rational to pray.
If it leads to winning more than other possibilities, it’s rational to do it. If your utility function values pretending to be stupid so you’ll be well-liked by idiots, that is winning.
Key phrase. The accurate map leads to more winning. Acknowledging that X obviously doesn’t work, but pretending that it does in order to win is very different from thinking X works.
ETA: It is all fine and dandy that I am getting upvotes for this, and by all means don’t stop, but really I am just a novice applying Rationality 101 whereever I see fit in order to earn my black belt.
What evidence is there that the map is static? We make maps and the world transforms. Rivers become canyons; mountains become mole hills (pardon the rhetorical ring I could not resist). Given that all maps are approximations isn’t it rational to moderate one’s navigation with the occasional off course exploration to verify that not drastic changes have occurred in the geography?
And because I feel the analogy is pretty far removed at this point, what I mean by that, is that if we have charted a goal-orientation based on our map that puts us on a specific trajectory, would it not be beneficial to occasional abandon our goal-orientation to explore other trajectories for potentially new and more lucrative paths.
The evidence that the territory is static is called Physics. The laws does not change, and the elegant counterargument against anti-inductionism is that if induction didn’t work our brains would stop working, because our brains depend on static laws.
There is no evidence whatsoever that the map is static. It should never be, you should always be prepared to update, there isn’t a universal prior that lets you reason inductively about any universe.
The territory is not static. Have you ever heard of quantum physics?
Quantum physics is invariant under temporal translation too.
The laws don’t change by definition. If something changes, we try to figure out some invariant description of how it changes, and call that a law. We presume a law even when we don’t know the invariant description (as is the case with QM&gravity combined). If there was magic in the real world, we’d do the same thing and have same invariant laws of magic, even though number of symmetries may have been lower.
The territory is governed by unchanging perfectly global basic mathematically simple universal laws.
The Schrödinger equation does not change. Ever.
And further more, you can plot the time dimension as a spatial dimension and then navigate a model of an unchanging structure of world lines. That is an accepted model called the Block Universe in General Relativity. The Block universe is ‘static’ that is, without time.
There is reason to believe the same can be done in quantum mechanics.
Why would that not be part of the trajectory traced out by your goal-orientation, or a natural interaction between the fuzziness of your map and your goals?
Well you would try to have that as part of your trajectory, but what I am suggesting is that there will always be things beyond your planning, beyond your reasoning, so in light of this perhaps we should strategically deviate from those plans every now and then to double check what else is out there.
I’m still confused by what you’re considering inside my reasoning and outside my planning / reasoning. If I say “spend 90% of your time in the area with the highest known EV and 10% of your time measuring areas which have at least a 1% chance of having higher reward than the current highest EV, if they exist,” then isn’t my ignorance about the world part of my plan / reasoning, such that I don’t need to deviate from those plans to double check?
Personally, I think that behavior should be rewarded.
Thank you, and I share that view. Why don’t we see everyone doing it? Why, I would be overjoyed if everyone was so firmly trained in Rat101 that comments like these were not special.
But now I am deviating into a should-world + diff.
I’m pretty sure we do see everyone doing it. Randomly selecting a few posts, in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread have slightly more than 50% upvoted comments and the Rationally Irrational comments also have more upvoted than not.
It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.
EDIT: Just in case this comes off as disparaging LW’s upvote generosity or average comment quality, it’s not.
Though among LW members, people probably don’t need to be encouraged to use basic rationality. If we could just upvote and downvote people’s arguments in real life...
I’m also considering the possibility that MHD was asking why we don’t see everyone using Rationality 101.
I’m talking about the real world, not an imaginary one. You can make up imaginary worlds to come up with a counterexample to any generalisation you hear, but it amounts to saying “Suppose that were false? Then it would be false!”
Richard,
Would you agree that the rate of speed that you try to do something is directly correlated to the accuracy you can produce?
I imagine the faster you try to do something to poorer your results will be. Do you disagree?
If it is true that at times accuracy demands some degree of suspension/inaction, then I would suggest to you that tools such as praying, meditating, and “making stuff up” serve to slow the individual down, allowing for better accuracy in the long term. Whereas, increasing intentionality will beyond some threshold decrease overall results.
Does that make sense?
Slowing down will only give better results if it’s the right sort of slowing down. For example, slowing down to better attend to the job, or slowing down to avoid exhausting oneself. But I wasn’t talking about praying, meditating, and making stuff up as ways of avoiding the task, but as ways of performing it. As such, they don’t work.
It may be very useful to sit for a while every day doing nothing but contemplating one’s own mind, but the use of that lies in more clearly observing the thing that one studies in meditation, i.e. one’s own mind.
I am suggesting the task they perform has two levels. The first is a surface structure, defined by whatever religious or creative purpose the performer thinks they serve. In my opinion, the medium of this level is completely arbitrary. It does not matter what you pray to, or if you meditate or pray, or play baseball for that matter. The importance of such actions comes from their deep structure, which develops beneficial cognitive, emotional, or physical habits.
Prayer is in many cultures a means of cultivating patience and concentration. The idea, which has been verified by the field of psychology, is that patience, concentration, reverence, toleration, empathy, sympathy, anxiety, serenity, these and many other cognitive dispositions are not the result of a personality type, but rather the result of intentional development.
Within the last several decades there has been a revolution within the field of psychology as to what action is. Previously cognitive actions were not thought of as actions, and therefore not believed to be things that you develop. It was believed that some people where just born kinder, more stressed, more sympathetic, etc, that there were cognitive types. We know now is that this is not true. While it is true that everyone probably is born with a different degree of competency in these various cognitive actions (just as some people are probably born slightly better at running, jumping, or other more physical actions), more important than innate talent is the amount of work someone puts into a capacity. Someone born with a below average disposition for running can work hard and become relatively fast. In the same way, while there are some biological grounds and limitations, for the majority of people, the total level of capacity they are able to achieve in some action is determined by the amount of work they devote to improving that action. If you work out your tolerance muscles, you will become able to exhibit greater degrees of tolerance. If you work out your concentration muscle, you will be able to concentrate to greater degrees. How do you work out tolerance or concentration muscles? By engaging in tasks that require concentration or tolerance. So, does praying 5 times a day to some God have an impact on reality? Well if you mean in the sense that a “God” listens to and acts on your prayers, No. But if you mean in the sense that the commitment to keeping a schedule and concentration on one thing 5 times, then yes it does. It impacts the reality of your cognition and consciousness.
So returning to what I was saying about suspending action. You interpreted it as “avoiding a task” but I would suggest that suspending action here has deeper meaning. It is not avoiding a task, but developing competencies in caution, accepting a locus of control, limitations, and acceptance. There are more uses in meditation than just active reflection of thought. In fact, most meditation discourages thought. The purpose is to clear your mind, suggesting that there is a benefit in reducing intentionality to some degree. Now, let me be clear that what I am advocating here is very much a value based position. I am saying there is a benefit in exercising the acceptance of limitations to some degree , a benefit in caution to some degree, etc. I would be interested to know do you disagree?
That is a lot of words, but it seems to me that all you are saying is that meditation (misspelled as “mediation” throughout) can serve certain useful purposes. So will a spade.
BTW, slowing a drum rhythm down for a beginner to hear how it goes is more difficult than playing it to speed.
Along with religion, praying, and making stuff up. Meditating (thanks for the correction) was just an example.
Oh, I also don’t get the spade comment either. I mean I agree a spade has useful purposes but what is the point of saying so here?
Not exactly sure what you are trying to express here. Do you mind further explanation?
Cox’s theorem does show that Bayesian probability theory (around here a.k.a. epistemic rationality) is the only way to assign numbers to beliefs which satisfies certain desiderata.
Aliciaparr,
This is in a sense the point of my essay! I define rationality as a tool for accuracy, because I believed that was a commonly held position on this blog (perhaps I was wrong). But if you look at the overall point of my essay, it is to suggest that there are times when what is desired is achieved without rationality, therefore suggesting alternative tools for accuracy. As to the idea of a “best tool”, as I outline in my opening, I do not think such a thing exists. A best tool implies a universal tool for some task. I think that there are many tools for accuracy, just as there are many tools for cooking. In my opinion it all depends on what ingredients you are faced with and what you want to make out of them.
Maybe think about it this way: what we mean by “rationality” isn’t a single tool, it’s a way of choosing tools.
That is just pushing it back one level of meta-analysis. The way of choosing tools is still a tool. It is a tool for choosing tools.
I agree, and the thing about taking your selection process meta is that you have to stop at some point. If you have more than 1 tool for choosing tools, how do you choose which one to pick for a given situation? You’d need a tool that chooses tools that chooses tools! Sooner or later you have to have a single top level tool or algorithm that actually kicks things into motion.
This is where we disagree. To have rationality be the only tool for choosing tools is to assume all meaningful action is derived from the intentional transformation. I disagree with this idea, and I think modern psychology disagrees as well. It is not only possible, it is at times essential to have meaningful action that is not intentionally driven. If you accept this statement as fact, then it implies the need for a secondary system of tool choosing. More specifically, a type of emergency brake system. You have rationality that is the choosing system, and then the secondary system that shuts the system down when it is necessary to halt further production of intentionality.
If by “not intentionally driven” you mean things like instincts and intuitions, I agree strongly. For one thing, the cerebral approach is way too slow for circumstances that require immediate reactions. There is also an aesthetic component to consider; I kind of enjoy being surprised and shocked from time to time.
Looking at a situation from the outside, how do you determine whether intentional or automatic action is best? From another angle, if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn’t, or vice versa, what (if anything) would you pick?
These evaluations themselves are part of yet another tool.
Yes, exactly.
I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but as of now the best I have come up with is moderating against extremes at either end. So if it seems like I have been overly intentional in recent days or weeks, I try to rely more on instinct and intuition. It is rarely the case that I am relying too heavily on the latter ^_^
Right, this is a good idea! You might consider an approach that decides which situations call for intuition and which call for intentional thought, rather than aiming only to keep the two in balance (though the latter does approximate the former to the degree that such situations arise with equal frequency).
Overall, what I’ve been getting at is this: Value systems in general have this property that you have to look at a bunch of different possible outcomes and decide which ones are the best, which ones you want to aim for. For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one’s “utility function” or “terminal values”. This is true even though the human brain actually physically implements a person’s values as multiple modules operating at the same time rather than a single central dispatch.
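The idea of collapsing many value modules into one top-level function can be sketched concretely. This is purely a hypothetical illustration, not anyone’s actual decision procedure: the module names, weights, and scores below are all made up for the example. The point is just that several separately-scored values can be combined into a single utility function that acts as the final arbiter.

```python
# Hypothetical sketch: several value "modules" scored separately,
# then combined into one top-level utility function that serves as
# the final arbiter among competing values.

def excitement(outcome):
    # How exciting this outcome is (made-up scoring).
    return outcome.get("novelty", 0.0)

def security(outcome):
    # How safe/stable this outcome is (made-up scoring).
    return outcome.get("stability", 0.0)

# The weights are assumptions: how much each value matters
# relative to the others for this particular agent.
WEIGHTS = {excitement: 0.4, security: 0.6}

def utility(outcome):
    """Single top-level function combining all value modules."""
    return sum(w * module(outcome) for module, w in WEIGHTS.items())

def choose(outcomes):
    """The 'final arbiter': pick the outcome with the highest combined utility."""
    return max(outcomes, key=utility)

options = [
    {"name": "quit job", "novelty": 0.9, "stability": 0.2},
    {"name": "stay put", "novelty": 0.1, "stability": 0.9},
]
best = choose(options)
```

With these (arbitrary) weights, security wins out; change the weights and the arbiter can flip its verdict, which is exactly the role a top-level utility function plays.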
In your article, you seemed to be saying that you specifically think that one shouldn’t have a single “final decision” function at the top of the meta stack. That’s not going to be an easily accepted argument around here, for the reasons I stated above.
Yeah, this is exactly what I am arguing.
Could you explain the technical reasons more, or point me to some essays where I could read about this? I am still not convinced why it is more beneficial to have a single operating system.
I’m no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as “I want X and Y and Z and W and A1 through A22.” Talking about whether I have one goal or 26 goals therefore seems like a distraction.
In regards to why it’s possible, I’ll just echo what TheOtherDave said.
The reason it’s helpful to aim for a single top-level utility function is that otherwise, whenever there’s a conflict among the many, many things we value, we’d have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two?
Is quitting your job a good idea or not? Is going rock climbing instead of staying at home reading this weekend a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way?
So I guess it comes down to: how important is it to you that your values are self-consistent?
More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
Well, there’s always the approach of letting all of me influence my actions and seeing what I do.
Thanks for the link. I’ll respond back when I get a chance to read it.
If you’re going to use the word rationality, use its definition as given here. Defining rationality as accuracy just leads to confusion and ultimately bad karma.
As for a universal tool for some task (e.g. updating your beliefs)? You really should take a look at Bayes’ theorem before you claim that no such thing exists.
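For what it’s worth, the theorem itself is a one-line update rule: P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch, with the prior and likelihoods chosen purely for illustration:

```python
# Minimal illustration of Bayes' theorem as a general belief-updating rule:
#   P(H|E) = P(E|H) * P(H) / P(E)
# All numbers below are made up for the example.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Posterior probability of hypothesis H after observing evidence E.

    prior: P(H), likelihood: P(E|H), likelihood_if_false: P(E|not H).
    """
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Example: prior belief of 0.3; the evidence is twice as likely if H is true.
posterior = bayes_update(prior=0.3, likelihood=0.8, likelihood_if_false=0.4)
```

The same rule applies whatever H and E happen to be, which is the sense in which it is a universal tool for updating on evidence.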
I am willing to look at your definition of rationality, but don’t you see how it is problematic to prescribe one static definition for a word?
OK, so you do believe that Bayes’ theorem is a universal tool?