Seems like you are trying to elaborate on Eliezer’s maxim “Rationality is Systematized Winning”. Some of what you mentioned implies shedding any kind of ideology, though sometimes wearing a credible mask of having one, and also being smarter than most people around you, both intellectually and emotionally. Of course, if you are already one of those people, then you don’t need rationality, because, in all likelihood, you have already succeeded in what you…
I think the thing I’m gesturing at here is related to, but different from, the systematized-winning thing.
Some distinctions that I think make sense (but I would defer to people who seem further along this path than I am):
Systematized Winning – The practice of identifying and doing the thing that maximizes your goal (or, if you’re not a maximizer, ensures a good distribution of satisfactory outcomes). (There’s a toy sketch of this after these distinctions.)
Law Thinking – (i.e. Law vs Tools) – Lawful thinking is having a theoretical understanding of what would be the optimal action for maximizing utility, given various constraints. This is a useful idea for a civilization to have. Whether it’s directly helpful for you to maximize your utility depends on your goals, environment, and shape-of-your-mental-faculties.
I’d guess that for most humans (of average intelligence), what you want is for someone else to do the Law thinking: figure out the best thing, figure out the best approximation of the best thing, and then distill it down to something you can easily learn.
Being a Robust Agent – Particular strategies for pursuing your goals, wherein you strive to have rigorous policy-making, consistent preferences (or consistent ways to resolve inconsistency), ways to reliably trust yourself and others, etc.
You might summarize this as “the strategy of embodying lawful thinking to achieve your goals.” (not sure if that quite makes sense)
I expect this to be most useful for people who:
find rigorous policy-level, consistency-driven thinking easy, such that it’s just the most natural way for them to approach their problems
have a preference for ensuring that their solutions to problems don’t break down in edge cases (i.e. nerds often like having explicit understandings of things, independent of how useful that is)
have goals that will likely cause them to run into edge cases, such that it’s more valuable to have figured out in advance how to handle those.
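To make the Systematized Winning / Law framing above concrete, here’s a minimal sketch of picking the expected-utility-maximizing action. The actions, probabilities, and utilities are entirely invented for illustration; the point is just the shape of the calculation that Law thinking is trying to characterize.

    # Toy sketch: pick the action with the highest expected utility.
    # Actions, probabilities, and utilities are invented for illustration.
    outcomes = {
        "take_stable_job":   [(0.9, 60), (0.1, 40)],    # (probability, utility)
        "start_company":     [(0.2, 200), (0.8, 10)],
        "go_back_to_school": [(0.6, 80), (0.4, 30)],
    }

    def expected_utility(lottery):
        return sum(p * u for p, u in lottery)

    best_action = max(outcomes, key=lambda a: expected_utility(outcomes[a]))
    for action, lottery in outcomes.items():
        print(action, expected_utility(lottery))
    print("best:", best_action)   # go_back_to_school (EU 60) in this toy setup

Roughly: the “Law” part is knowing why argmax-over-expected-utility is the right shape in the first place; the “Tools” part is whatever approximation of it you can actually run.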
When you look at the Meta-Honesty post… I don’t think the average person will find it a particularly valuable tool for achieving their goals. But I expect there to be a class of person who actually needs it as a tool to figure out how to trust people in domains where it’s often necessary to hide or obfuscate information.
Whether you want your decision theory to be robust enough that Omega, simulating you, will give you a million dollars depends a lot on whether you expect Omega to actually be simulating you and making that decision. I know at least some people who are actually arranging their lives with that sort of concern in mind.
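To spell out the Omega case: this is just the standard Newcomb payoff structure with an assumed predictor accuracy p (none of these numbers come from the discussion itself). The policy-level expected values look like this:

    # Newcomb-style payoff comparison, assuming the predictor is correct
    # with probability p. Standard $1,000,000 / $1,000 payoffs.
    def one_box(p):
        # If the predictor foresaw one-boxing (prob p), the opaque box is full.
        return p * 1_000_000

    def two_box(p):
        # If the predictor foresaw two-boxing (prob p), the opaque box is empty,
        # so you get only the transparent $1,000; otherwise you get both boxes.
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (0.5, 0.9, 0.99):
        print(p, one_box(p), two_box(p))

Under these payoffs one-boxing pulls ahead whenever p is above roughly 0.5005, so the load-bearing question really is whether you expect something Omega-like to be predicting you at all.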
I do think there’s an alternate frame where you just say “no, rationality is specifically about being a robust agent. There are other ways to be effective, but rationality is the particular way of being effective where you try to have cognitive patterns with good epistemology and robust decision theory.”
This is in tension with the “rationalists should win” thing. Shrug.
I think it’s important to have at least one concept that is “anyone with goals should ultimately be trying to solve them the best way possible”, and at least one concept that is “you might consider specifically studying cognitive patterns and policies and a cluster of related things, as a strategy to pursue particular goals.”
I don’t think this is quite the same thing as instrumental rationality (although it’s tightly entwined). If your goals are simple and well-understood, and you’re interfacing in a social domain with clear rules, the most instrumentally rational thing might be to not overthink it and follow common wisdom.
But it’s particularly important if you want to coordinate with other agents, over the long term. Especially on ambitious, complicated projects in novel domains.
On my initial read, I read this as saying “this is the right thing for some people, even when it isn’t instrumentally rational” (?!). But
I think it’s important to have at least one concept that is “anyone with goals should ultimately be trying to solve them the best way possible”, and at least one concept that is “you might consider specifically studying cognitive patterns and policies and a cluster of related things, as a strategy to pursue particular goals.”
makes me think this isn’t what you meant. Maybe clarify the OP?
I was meaning to say “becoming a robust agent may be the instrumentally rational thing for some people in some situations. For other people in other situations, it may not be helpful.”
I don’t know that “instrumental rationality” is that well defined, and there might be some people who would claim that “instrumental rationality” and what I (here) am calling “being a robust agent” are the same thing. I disagree with that frame, but it’s at least a cogent frame.
You might define “instrumental rationality” as “doing whatever thing is best for you according to your values”, or you might use it to mean “using an understanding of, say, probability theory and game theory and cognitive science to improve your decision making”. I think it makes more sense to define it the first way, but I think some people might disagree with that.
If you define it the second way, then for some people – at least, people who aren’t that smart or good at probability/game theory/cognitive science – “the instrumentally rational thing” might not be “the best thing.”
I’m actually somewhat confused about which definition Eliezer intended. He has a few posts (and HPMOR commentary) arguing that “the rational thing” just means “the best thing”. But he also notes that it makes sense to use the word “rationality” specifically when we’re talking about understanding cognitive algorithms.
Not sure whether that helped. (Holding off on updating the post till I’ve figured out what the confusion here is)
I define it the first way, and don’t see the case for the second way. Analogously, for a while, Bayesian reasoning was our best guess of what the epistemic Way might look like. But then we found out about logical induction, and that seems to tell us a little more about what to do when you’re an embedded agent. So we now see it would have been a mistake to define “epistemic rationality” as “adhering to the dictates of probability theory as best as possible”.
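For concreteness, “adhering to the dictates of probability theory” here means something like a plain Bayesian update. A minimal sketch, with invented numbers:

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E). Numbers invented for illustration.
    prior_h = 0.3                    # P(H)
    p_e_given_h = 0.8                # P(E|H)
    p_e_given_not_h = 0.2            # P(E|~H)

    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    posterior_h = p_e_given_h * prior_h / p_e
    print(posterior_h)               # ~0.63

Logical induction is, roughly, what you reach for when the agent can’t even evaluate these quantities exactly because it’s uncertain about its own reasoning; the sketch above quietly assumes that problem away.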
I think that Eliezer’s other usage of “instrumental rationality” points to fields of study for theoretical underpinning of effective action.
(not sure if this was clear, but I don’t feel strongly about which definition to use; I just wanted to disambiguate between definitions people might have been using)
I think that Eliezer’s other usage of “instrumental rationality” points to fields of study for theoretical underpinning of effective action.
This sounds right-ish (i.e. this sounds like something he might have meant). When I said “use probability and game theory and stuff” I didn’t mean “be a slave to whatever tools we happen to use right now”; I meant them sort of as examples of “things you might use if you were trying to base your decisions and actions off of sound theoretical underpinnings.”
So I guess the thing I’m still unclear on (re: people’s common usage of words): do most LWers think it is reasonable to call something “instrumentally rational” if you just sorta went with your gut without ever doing any kind of reflection (assuming your gut turned out to be trustworthy)?
Or are things only instrumentally rational if you had theoretical underpinnings? (Your definition says “no”, which seems fine. But it might leave you with an awkward distinction between “instrumentally rational decisions” and “decisions rooted in instrumental rationality.”)
I’m still unsure if this is dissolving confusion, or if the original post still seems like it needs editing.
Your definition says “no”, which seems fine. But it might leave you with an awkward distinction between “instrumentally rational decisions” and “decisions rooted in instrumental rationality.”
My definition was the first, which is “instrumental rationality = acting so you win at your values”. So, wouldn’t it say that following your gut was instrumentally rational? At least, if it’s a great idea in expectation given what you knew – I wouldn’t say lottery winners were instrumentally rational.
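The lottery point is just ordinary expected-value arithmetic (ticket price, jackpot, and odds invented for illustration): the purchase is a bad idea in expectation, and winning doesn’t retroactively change that.

    # Toy lottery: great outcome for the winner, bad decision in expectation.
    # Ticket price, jackpot, and odds are invented for illustration.
    ticket_price = 2.00
    jackpot = 100_000_000
    p_win = 1 / 300_000_000

    ev = p_win * jackpot - ticket_price
    print(ev)   # about -1.67: negative expected value, so winning doesn't make
                # the purchase instrumentally rational in the "in expectation" sense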
I guess the hangup is in pinning down “when things are actually good ideas in expectation”, given that it’s harder to know that without either lots of experience or clear theoretical underpinnings.
I think one of the things I was aiming for with Being a Robust Agent is “you set up the longterm goal of having your policies and actions have knowably good outcomes, which locally might be a setback for how capable you are, but allows you to reliably achieve longer term goals.”