Beware the Nihilistic Failure Mode

I have noticed that the term ‘nihilism’ carries quite a few different connotations, and I suspect that this is not a coincidence. Reputedly the most popular connotation, and in my opinion the least well-defined, is existential nihilism, ‘the philosophical theory that life has no intrinsic meaning or value.’ I think that most LessWrong users would agree that there is no intrinsic meaning or value, but would also argue that there is contingent meaning or value, and that the absence of intrinsic meaning or value is no justification for being a generally insufferable person.

There are also the related but perhaps better-defined positions of moral nihilism and epistemological nihilism, as well as the not-unrelated fatalism.

Here, it goes without saying that each of these positions is wrong.

I recognize a pattern here. It seems that, in each case, the person who arrives at one of these positions has, in some informal sense, given up.
The idea finally came to my explicit attention after reading a passage in Nick Bostrom’s Technological Revolutions: Ethics and Policy in the Dark. Bostrom writes:
If we want to make sense of the claim that physics is better at predicting than social science is, we have to work harder to explicate what it might mean. One possible way of explicating the claim is that when one says that physics is better at predicting than social science one might mean that experts in physics have a greater advantage over non-experts in predicting interesting things in the domain of physics than experts in social science have over non-experts in predicting interesting things in the domain of social science. This is still very imprecise since it relies on an undefined concept of “interesting things”. Yet the explication does at least draw attention to one aspect of the idea of predictability that is relevant in the context of public policy, namely the extent to which research and expertise can improve our ability to predict. The usefulness of ELSI-funded activities might depend not on the absolute obtainable degree of predictability of technological innovation and social outcomes but on how much improvement in predictive ability these activities will produce. Let us hence set aside the following unhelpful question:
“Is the future of science or technological innovation predictable?”
A better question would be,
“How predictable are various aspects of the future of science or technological innovation?”
But often, we will get more mileage out of asking,
“How much more predictable can (a certain aspect of) the future of science or technological innovation become if we devote a certain amount of resources to studying it?”
Or better still:
“Which particular inquiries would do most to improve our ability to predict those aspects of the future of S&T that we most need to know about in advance?”
Pursuit of this question could lead us to explore many interesting avenues of research which might result in improved means of obtaining foresight about S&T developments and their policy consequences.
Crow and Sarewitz, however, wishing to side-step the question about predictability, claim that it is “irrelevant”:
“preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner.”
This answer is too quick. Each of the elements they mention as required for the preparation for the future relies in some way on accurate prediction. A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations. Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next. It also requires prediction to figure out what kind of institutions will prove healthy, resilient, and effective in responding or adapting to future changes. Predicting the future quality and behavior of institutions that we create today is not an exact science.
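As an aside, Bostrom’s explication can be made concrete with a little notation. The following formalization is mine, not Bostrom’s, the symbols are introduced purely for illustration, and the undefined set of ‘interesting’ questions is still doing all the hard work. Write $\mathrm{Acc}_E(d)$ and $\mathrm{Acc}_N(d)$ for the predictive accuracy of experts and non-experts, respectively, on the interesting questions of a domain $d$. The claim that physics is better at predicting than social science then reads

$$\mathrm{Acc}_E(\text{physics}) - \mathrm{Acc}_N(\text{physics}) > \mathrm{Acc}_E(\text{social science}) - \mathrm{Acc}_N(\text{social science}),$$

and Bostrom’s preferred questions concern the marginal quantity $\partial\,\mathrm{Acc}_E(d)/\partial r$: how much additional predictive accuracy another unit of resources $r$ devoted to study would buy.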
This post is about quick answers. The One True Morality is not written in the atoms, but it is also a mistake to conclude from this that we may value whatever we postulate. We cannot know things with certainty, but it is also a mistake to conclude from this that we can know nothing. The universe is fundamentally deterministic, but it is also a mistake to conclude from this that we should take the null action in every case.
I think that in each case where a person arrives at one of these positions, it is not the result of a verbal, deductive argument. Rather, it is the verbalization of a wordless feeling of difficulty, an expression of one’s attitude that the confusion surrounding morality, epistemology, and free will is intractable.
It has already been said that one should be suspicious of ordinary solutions to impossible problems. But I think the point made above has been overlooked as a special case: sometimes, something even less than an ordinary solution is proposed. Sometimes, it is proposed that there is no solution.
These points are obvious to most LessWrong users, but the general experience is perhaps worth distinguishing. When you encounter a difficult problem (of either an instrumentally or an epistemically rational nature, I might add), beware a feeling of futility, or a compulsion to inform others that their actions are futile.
This is also perhaps similar to the idea of a wrong question. I would argue that even if one holds the verbal, propositional belief that confusion exists in the map and not in the territory, it is easy to be dissuaded by feelings of difficulty without noticing. Perhaps it is worth learning to notice the feeling of difficulty itself, the sort of behavior it inspires, and the danger it carries.