Not that I would have anything against nice space exploration-themed imagery, but what makes that particularly connected to the art of rationality?
(I really like this post in general though, strong-upvoted.)
Scope sensitivity and the cosmic endowment. I definitely feel like looking at the stars reminds me of how much stuff there is to optimize, which seems pretty related to rationality.
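To put very rough numbers on “how much stuff”: here’s a back-of-envelope sketch, where every figure is an order-of-magnitude placeholder rather than careful astronomy (the reachable-galaxy count in particular is just an illustrative guess):

```python
# Back-of-envelope: raw material available off-Earth vs. on Earth.
# All figures are order-of-magnitude placeholders, not precise values.

EARTH_MASS_KG = 6e24        # mass of Earth, roughly 6 * 10^24 kg
SUN_MASS_KG = 2e30          # mass of the Sun, roughly 2 * 10^30 kg
STARS_PER_GALAXY = 1e11     # rough order-of-magnitude average
REACHABLE_GALAXIES = 1e9    # illustrative guess; published estimates vary widely

reachable_stellar_mass = SUN_MASS_KG * STARS_PER_GALAXY * REACHABLE_GALAXIES
earth_equivalents = reachable_stellar_mass / EARTH_MASS_KG

print(f"Reachable stellar mass: ~{reachable_stellar_mass:.0e} kg")
print(f"That is roughly {earth_equivalents:.0e} Earths' worth of raw material")
# Even if these inputs are off by several orders of magnitude, the
# qualitative conclusion survives: almost none of the optimizable
# matter is on Earth.
```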
I’m surprised that no one has mentioned the thing that feels most rationalist-y about SpaceX, which is the exercise of agency against civilizational inadequacy. Elon Musk looked at space travel and was like, ‘that seems inadequate, I bet I could do it better.’ And everyone said, ‘you’re crazy, that’s impossible.’ And Elon Musk didn’t listen to them, and now SpaceX is a leader in spaceflight.
I guess the key phrase here might be “the Art of Rationality, as practiced on LessWrong”. I do somewhat resonate with what you describe, but that feels more associated with a specific set of values that’s predominant on LW due to a founder effect, rather than an integral part of rationality itself. So someone could still be rational, in the sense in which LW conceives of rationality, without sharing the values implied by the concept of a cosmic endowment.
(That said, I’m cool with this thread being about that particular aesthetic, rather than rigorously just the art of rationality.)
Hmm, definitely feels core to the art of rationality to me. Like, convergent instrumental goals apply to humans as well. Understanding that just feels straightforwardly useful for the generalized art of rationality.
They certainly apply, but the formulation of the instrumental convergence thesis is very general, e.g. as stated in Bostrom’s paper:
Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by many intelligent agents.
That only states that those instrumental values are likely to be pursued by many agents to some extent, depending on how useful they are for fulfilling the ultimate values of the agents. But there’s nothing to say that it would be particularly useful for the goals of most humans to pursue them to the point of e.g. advancing space colonization.
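To illustrate the “to some extent, depending on how useful” part: here’s a toy simulation, with a made-up saturating attainment function and made-up numbers (nothing here is from Bostrom’s paper). Extra resources help essentially every randomly drawn goal, but by amounts that vary by more than an order of magnitude, so how far it’s worth pursuing them varies just as much:

```python
import random

# Toy model: an agent's chance of achieving its final goal rises with
# resources but saturates, at a rate set by the goal's "difficulty".
# The model and all of its numbers are made up purely for illustration.

random.seed(0)

def attainment(resources: float, difficulty: float) -> float:
    """Probability of achieving the goal; increasing and saturating in resources."""
    return resources / (resources + difficulty)

# 10,000 agents with randomly drawn goal difficulties.
difficulties = [random.uniform(0.1, 1000.0) for _ in range(10_000)]

# Gain in attainment probability from a tenfold resource increase.
gains = [attainment(10.0, d) - attainment(1.0, d) for d in difficulties]

print(f"Goals helped by extra resources: {sum(g > 0 for g in gains)}/{len(gains)}")
print(f"Largest gain:  {max(gains):.3f}")
print(f"Smallest gain: {min(gains):.6f}")
# Every goal benefits (that's the convergence), but the size of the
# benefit varies widely, so pursuing resources all the way to, say,
# space colonization pays off only for some goals.
```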
[...] specific set of values that’s predominant on LW due to a founder effect [...] this thread being about that particular aesthetic [...]
Noticing that there’s lots of matter to do something with is not an aesthetic; it’s awareness of a basic drive. While it’s technically possible to have a preference that doesn’t value things that can be made out of galaxies, it would be shocking if there were a significant number of humans whose correct idealization had that property. I mean, even a paperclip maximizer is not like that.
I think it’s important to avoid mixing up the question of values about what should actually happen in the world with the question of what seems aesthetically pleasing. What habryka referenced, and what you seem to be responding to, are (salient ideas associated with) actual values. But this post’s rules state that it should be something real that has power over the selection of art, not just aesthetic preference, which makes habryka’s appeal to values relevant; it would be a much weaker argument if we were only discussing aesthetic preference.
While it’s technically possible to have a preference that doesn’t value things that can be made out of galaxies, it would be shocking if there were a significant number of humans whose correct idealization had that property.
I have pretty broad uncertainty about whether “people’s correct idealization” is a useful concept in this kind of context and, assuming that it is, about what those idealizations would value. It seems to me that they might incorporate a fair amount of path dependence, with different, equally correct idealizations arriving at completely different ultimate outcomes.
which makes habryka’s appeal to values relevant; it would be a much weaker argument if we were only discussing aesthetic preference.
I tend to think that (like identities) aesthetics are something like cached judgements which combine values and strategies for achieving those values.
I have pretty broad uncertainty about whether “people’s correct idealization” is a useful concept in this kind of context and, assuming that it is, about what those idealizations would value [...]
Understanding of a concept shouldn’t directly depend on whether it’s useful, so I think it’s an error to entertain the assumption of usefulness. (What use were you considering? Maybe it is relevant in a way I don’t see?)
Here, it doesn’t matter what stuff people would value (so it isn’t relevant that different people value different things, or that there is a lot of uncertainty about what people value). The question is whether the total value of the most-valuable-to-a-given-person stuff, whatever that is, made out of reachable matter is significant compared to what can be made out of Earth alone. Do you mean that it’s plausible that, for a lot of people, it isn’t actually significant?
That’s the question I implicitly posed in the grandparent; it’s not clear from your response what you think about it. A point I would agree with is that the question is too vague to support a robust answer, so heuristically it makes sense to entertain only some related considerations while holding off on articulating a conclusion (in the same spirit as for most stuff pundits are wont to irresponsibly opine about).