Is this a fair summary?
The answer to the clever meta-moral question, “But why should we care about morality?” is just “Because when we say morality, we refer to that-which-we-care-about; and, not to belabor the point, we care about what we care about. Whatever you think you care about that isn’t morality, I’m calling that morality too. Precisely which things are moral and which are not is a difficult question, but there is no non-trivial meta-question.”
I don’t understand what’s being claimed here, and I feel the urge to get off the boat at this point without knowing more. Most of the stuff we care about isn’t about 3-second reactions but about >5-minute reactions. Those require thinking, and maybe require non-electrical changes: synaptic plasticity, as you mention. If they do require non-electrical changes, then this reasoning doesn’t go through, right? If we make a thing that simulates the electrical circuitry but doesn’t simulate synaptic plasticity, we’d expect to get… I don’t know, maybe a thing that can perform tasks that are already “compiled into low-level code”, so to speak, but not tasks that require thinking? Is the claim that thinking doesn’t require such changes, or that some thinking doesn’t require such changes, and that that subset of thinking is enough to greatly decrease X-risk?