This post makes a brave attempt to clarify something not easy to point to, and ends up somewhere between LessWrong-style analysis and almost continental philosophy, sometimes pointing toward things beyond the reach of words with poetry—or at least references to poetry.
In my view, it succeeds in its central quest: creating a short handle for something subtle and not easily legible.
The essay also touches on many tangential ideas. Re-reading it after two years, I notice I’ve forgotten almost all the details and find the text surprisingly long. The handle itself, though, stuck.
Evaluating deep atheism
With the handle of “deep atheism” in hand, some natural questions—partially discussed in the text—are “is deep atheism right?”, “should people believe deep atheism?”, and “should people Believe In deep atheism?”
My current guess is that evaluating the truth of “deep atheism” lies at or beyond the limits of legibility. Human values are not really representable as legible reasoning, complex priors about the general nature of reality are not really representable by legible reasoning either, and the neural substrate is not transferable between brains. “The justification engine”—or a competent philosopher or persuasive writer—can create stories or arguments pushing one way or another, but I’m somewhat sceptical that the epistemic structure really rests on the arguments.
I’m not in favour of ordinary mortals trying to “Believe In deep atheism” and would not expect that to lead to good consequences.
Moral realism
The section I like the least is “Are moral realists theists?” I don’t think the picture in which “Good just sits outside of Nature, totally inaccessible, and we guess wildly about him on the basis of the intuitions that Nature put into our heart” represents the strongest version of moral realism.
My preferred versions of quasi-moral-realism give moral claims a status similar to mathematics. Do Real numbers sit outside Nature, totally inaccessible? I’d say no. Would aliens use them? That’s an empirical question about convergent evolution of abstractions. I’d be surprised if any advanced reasoner in this universe didn’t use something equivalent to natural numbers. For Reals, I’d guess it’s easy to avoid Zermelo–Fraenkel set theory specifically, but highly convergent to develop something like a number line.
What does this tell us about Good? You can imagine that something like the process described in Acausal Normalcy leads to some convergent moral fixed points. (Does that solve AI risk? No.)
I wish more people tried to do something “between LessWrong-style analysis and almost continental philosophy”.