Thank you for sharing this; there are several useful conceptual tools in here. I like the way you’ve found crisply different adjectives to describe different kinds of freedom, and I like the way you’re thinking about the computational costs of surplus choices.
Building on that last point a bit: a savvy agent who has already evaluated N choices could keep a running estimate of the expected gain from considering X more choices, and compare that gain to the cost of computing the optimal choice out of N + X options. If the utility of an arbitrary choice follows anything like a normal distribution, and we write U(N) for the expected utility of the best of N options, then as N grows, U(N + X) holds a smaller and smaller advantage over U(N): the first N choices already cover most of the distribution, so it’s unlikely that a better choice lurks among the X additional ones you look at, and even if you do find one, it’s probably only slightly better. Yet for most humans, computing the best choice out of N + X options is more costly than computing the best choice out of N options, because you start to lose track of the details as you add possibilities to the list, and the list starts to feel boring or overwhelming, which makes it harder to focus. So there’s a natural stopping point where the cost of considering X additional options can be confidently predicted to outweigh the expected benefit of considering them, and when you reach that point, you should stop and pick the best choice you’ve already researched.
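To make that stopping rule concrete, here’s a minimal sketch of the comparison, assuming i.i.d. standard-normal utilities and a toy quadratic cost of holding a longer list in mind. The quadratic cost function, the batch size X, and the constant c are all my assumptions for illustration, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_best(n, trials=50_000):
    """Monte Carlo estimate of E[max of n i.i.d. N(0,1) utility draws]."""
    return rng.standard_normal((trials, n)).max(axis=1).mean()

def cost(n, c=0.002):
    """Toy cognitive-cost model: superlinear in list length, standing in for
    losing track of details as the list grows (my assumption, not the post's)."""
    return c * n ** 2

N = 5  # options already evaluated
X = 1  # extra options to consider per step
while True:
    # Expected improvement in the best option from evaluating X more choices.
    gain = expected_best(N + X) - expected_best(N)
    # Stop once the marginal cost is predicted to outweigh the expected gain.
    if gain <= cost(N + X) - cost(N):
        break
    N += X

print(f"natural stopping point: pick the best of the {N} options researched")
```

With these particular numbers the loop stops after roughly a dozen options; the exact point is an artifact of the toy cost constant, but the shape of the argument, shrinking marginal gain set against growing marginal cost, doesn’t depend on it.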
I like having access to at least some higher-order freedoms, because I enjoy planning and working toward long-term goals, but I don’t understand why the order of a freedom is important enough to justify orienting our entire system of ethics around it. I can imagine some extremely happy futures where everyone has stable access to dozens of high-quality choices in all areas of their lives, yet none of those choices exceed order 4, and none ever will. I think I’d take that future over our present and be quite grateful for the exchange. On the other hand, I can imagine some extremely dark futures where the order of choices is steadily increasing for most people, because, e.g., they’re becoming smarter and/or more resilient in a complicated world, but they’re trapped in a kind of grindy hellscape where they must constantly engage in that sort of long-term planning just to purchase moderately effective relief from their otherwise constant suffering.
So I’d question whether the order of freedoms is (a) one interesting heuristic that is good to look at when considering possible futures, or (b) actually the definition of what it would mean to win. If it’s (b), I think you have some more explaining to do.