I feel like “Something is good to the extent that an idealized version of me would judge it good” is a useful heuristic about goodness, but I agree that it doesn’t really work as a definition and I liked this post.
It seems like an important heuristic if we are in a bad position to figure out what is good directly (e.g. because we are spending our time fending off catastrophe or competing with each other), where it feels possible to construct an idealization that we’d trust more than ourselves (e.g. by removing the risk of extinction or some kinds of destructive conflict).
In particular, it seems we could (often) trust such idealized versions of ourselves to figure out how to perform further idealization better than we would. We don’t want to pick just any self-ratifying idealization, but we can hope to get around this by taking little baby steps, each of which ratifies the next. The very simple version of this heuristic quickly loses its value once we take a few steps and fix the most obviously broken and urgent things about our situation. Then we are left with hard questions about which kinds of idealizations are “best,” and then eventually with hard object-level questions.
(I do think any of these calls, even the apparently simple ones, is value-laden. For example, it seems like a human is not a single coherent entity, and different processes of idealization could lead to different balances between conflicting desires or ways of being. This kind of problem is more obvious for groups than individuals, since it’s clear from everyday life how early steps of “idealization” can fundamentally change the balance of power, but I think it’s also quite important for incoherent individuals. Not to mention more mundane forms of wrongness, that seem possible even for the simplest kinds of idealization and mean that no idealization is really a free lunch.)
I wrote about something a bit like your “ghost civilization” here (under “Finding Earth” and then “Extrapolation”).
I agree that it’s a useful heuristic, and the “baby steps” idealization you describe seems to me like a reasonable version to have in mind and to defer to over ourselves (including re: how to continue idealizing). I also appreciate that your 2012 post actually went through and sketched a process in that amount of depth/specificity.