Waking up to reality. No, not that one. We’re still dreaming.
Aleksi Liimatainen
At the extremes, people have one of four life goals: to achieve a state of nothingness (Hinayana enlightenment), to achieve a state of oneness (Mahayana enlightenment), to achieve a utopia of meaning (Galt's Gulch), or to achieve a utopia of togetherness (hivemind).
These are not distinct things—they’re alternative ways to frame one thing. All roads lead to Rome, so to speak. The way I see it, full enlightenment entails attaining all four at once. Just don’t get distracted by the taste of lotus on the way.
Ontologically distinct enlightenments suggest path dependence. That seems correct on reflection; updating and reframing.
Enlightenment is caused by a certain observation about mind/reality that is salient, obvious in retrospect, and reliably triggers major updates. The referent of this observation is universal and invariant but its interpretation and the resulting updates may not be; the mind can only work with what it has.
In other words, enlightenment has one referent in the territory but the resulting maps are path dependent. This seems consistent with what I know about spirituality-related failure modes and doctrinal disagreements. Also, the sixties.
So yeah. Caution is warranted. Just keep in mind that your skull is an information bottleneck, not an ontological boundary.
I noticed I was confused about how humans can learn novel concepts from verbal explanations without running into the symbol grounding problem. After some contemplation, I came up with this:
To the extent language relies on learned associations between linguistic structures and mental content, a verbal explanation can only work with what’s already there. Instead of directly inserting new mental content, the explanation must leverage the receiving mind’s established content in a way that lets the mind generate its own version of the new content.
There’s enough to say about this that it seems worth a post or several but I thought I’d float it here first. Has something like this been written already?
Thanks, this is exactly what I was looking for. Not a new idea then, though there’s something to be said for semi-independent reinvention.
The obvious munchkin move would be to develop a reliable means of bootstrapping a basic mental model of constructivist learning and grounding it in the learner's own direct experience of learning. Turning the learning process on itself should lead to some amount of recursive improvement, right? Has that been tried?
My mind keeps wanting to interpret your take on determinism as a fatalistic fallacy. Let me try to get some clarity on this.
If belief in determinism causes someone to make poorer choices, they’re doing it wrong. Do you agree? If not, why?
Belief in determinism is correlated with worse outcomes, but one doesn’t cause the other; both are determined by the state and process of the universe.
Wait, how does determinism obviate cause and effect? A timeless universe would, but deterministic causation is still causation, right? Not that it matters for the point at hand.
(I’d prefer a better term than “correlated”, there’s still some logical determination going on there. Not sure what to replace it with, though.)
The point is, it doesn’t matter if we live in a deterministic universe. Our values are still best served by pursuing them with our full effort, even if from some omniscient outside perspective the whole thing were predetermined. If modeling ourselves as deterministic would diminish our efforts, we’d be making a mental mistake.
If we do live in a deterministic universe, then free choice is simply what the unfolding of the determination feels like from the inside. As far as I can tell, the ontological details don't make much empirical difference and our intuitions are well-optimized for performance. I think I've somehow managed to update for the possibility of a timeless universe on the intuitive level but the difference is so small it's hard to tell. Feel free to stick with what you have, I guess.
Counterfactually, if your decisions were different, the future-ward implications of those decisions would be different. In that sense, they do have a point.
I wasn’t trying to ask whether one should believe in determinism or not. I was asking what effects belief in determinism should have.
As far as I can tell, these dissonances usually result from an ontological-type mismatch, e.g. a free-willed agent judging the choices of a deterministic one. Within-universe, moral choices and moral judgments are of the same ontological type and the dissonance cancels out.
These kinds of moral judgments only make sense between roughly equal agents anyway. If one is so much more capable that it can model the other as basically deterministic, it is better off exerting influence through causal channels.
I know. My claim is that the issue stems from the way our moral intuitions are grounded in the intuitive notion of free will. If we update to a deterministic world-model without updating the intuitions, we get the confusions you describe.
To clarify, deterministic agents rewarding and punishing deterministic agents only seems more problematic than the nondeterministic case because of our nondeterminist intuitions.
1. True. However, as long as we are discussing determinism, it is proper to avoid the intuitive confusion.
2. Help me out here. Which unstated assumptions do you think I have?
I’m engaging in this conversation because you seem to be almost-but-not-quite arguing against determinism in a way that suggests you may be operating from nondeterminist intuitions. I’ve been trying to figure out if that was the case and discussing the philosophy as it comes up.
What are your own assumptions on this? How do your intuitions mesh with the possibility of living in a deterministic universe?
1. The problem with that is, the standard intuitions implicitly assume that determinism is false. It’s hard to have a productive discussion about a counterintuitive topic if the contrary intuitions keep firing.
2. I believe that determinism is plausible but I don’t have strong ontological commitments at this time. Based on the trajectory of the conversation, I suspect you have a strong (implicit or explicit) commitment to nondeterminism. Do you?
Tu quoque, my friend. From my perspective, your reasoning is tacitly relying on intuitive assumptions that don’t apply in the (hypothetical) domain of a deterministic universe. In other words, you’re implicitly assuming your conclusion.
If determinism is true, you’re already a deterministic being in a deterministic universe. I suppose there’s a mental trick of flipping the perspective from the outside to the inside, but that may require taking the hypothesis more seriously than you’re likely to do.
Remember, people like me started out with the intuition of nondeterminism. We’ve already worked through the basic objections about things like choices, consequences and morality. If your wish is to engage productively on this topic, you may want to rethink your approach.
You keep commenting on determinism but given your intuitions, you end up sounding a bit like “Nondeterminism, therefore X.” In reply, people tell you something like “If determinism, then Y.”
This is the important point. If determinism is true, then nondeterminist intuitions are mistaken. What’s the point of participating in discussions of determinism if you keep applying intuitions that take its falsity as an axiom?
If determinism is true and compatibilism is false.
Huh. Going by the Wikipedia definition of compatibilism, it seems like a distinction without a difference. How does it help in your view?
I have pointed out what people worry they are going to lose under determinism.
This feels like worrying about losing the colors of the rainbow if optics is true. Maybe add that worry to the list of potentially mistaken intuitions.
Not giving up, updating. The whole point is that determinism (or timelessness for that matter) need not invalidate our notions of agency, consequence or morality. If it feels like it does, that’s a bug in the system.
Okay, let me try to put it this way.
Imagine someone giving you orders while holding a gun to your head. That situation feels distinctly unfree, even though you're entirely free to disobey and take the bullet.
Our intuitive sense of freedom may actually refer to a lack of externally imposed constraints on our decision process, as opposed to some inherent internal quality. The mistake would then be imagining determinism as an external imposition when it would in fact be a quality of the decision process itself.
Does that help?
It seems fairly likely to me that proto-AGI (i.e. AI that could autonomously learn to become AGI within <~10yrs of acting in the real world) gets deployed and creates proto-AGI subagents, some of which we don't become aware of (e.g. because of accidental/incidental/deliberate steganography) and/or are unable to keep track of. And then those continue to survive and reproduce, etc…
Now I’m wondering if it makes sense to model past or present cognitive-cultural information processes in a similar fashion. Memetic and cultural evolutions are a thing, and any agentlike processes that spawn could piggyback on our existing general intelligence architecture.
Are you sure that isn’t the same type of confusion? The way your decision process goes does make a difference to the outcomes of the universe. Again, being predictable-in-principle is a property of the process, not an external imposition.
You consider a number of choices. You judge them according to your decision criteria and choose the one that seems best. What difference does it make if some hypothetical omniscient observer could tell in advance which choice you’ll make? You’ll still choose just one, and you want it to be the best one.
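The point can be made concrete with a toy sketch (all names here are invented for illustration, not anything from the discussion): a deterministic decision procedure, plus an "omniscient" predictor that simply runs the same procedure in advance. The prediction comes out right every time, yet the agent's own weighing of options is still what produces the choice.

```python
def choose(options, score):
    """Deterministically pick the option with the best score."""
    return max(options, key=score)

def predict(options, score):
    """The 'omniscient observer' just simulates the same process ahead of time."""
    return choose(options, score)

# Hypothetical decision problem for the sketch.
options = ["study", "procrastinate", "sleep"]
score = {"study": 3, "procrastinate": 1, "sleep": 2}.get

prediction = predict(options, score)  # known before the agent decides
decision = choose(options, score)     # the agent still does the choosing

# The existence of a correct prediction changes nothing about the process:
# the agent still considered its options and picked the best one.
assert prediction == decision
```

Predictability-in-principle here is a property of `choose` itself, not a constraint imposed on it from outside; deleting the predictor would not change the decision.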
In what sense is the unchosen counterfactual a real one?
The SSC sequence (plus a whole bunch of other things) inspired me to think of deities as mythic representations of cultural collective intelligence. The God-shaped hole could then be understood as a psychological adaptation for collective intelligence, and religions as collective intelligence operating systems.
There’s a lot more that could be said on this topic but it seems to deserve its own sequence. Perhaps I should write one.