all the cognitive information processing
I don’t understand what’s being claimed here, and feel the urge to get off the boat at this point without knowing more. Most of what we care about isn’t 3-second reactions but >5-minute reactions. Those require thinking, and may require non-electrical changes (synaptic plasticity, as you mention). If they do require non-electrical changes, then this reasoning doesn’t go through, right? If we make a thing that simulates the electrical circuitry but doesn’t simulate synaptic plasticity, we’d expect to get… I don’t know, maybe a thing that can perform tasks that are already “compiled into low-level code”, so to speak, but not tasks that require thinking? Is the claim that thinking doesn’t require such changes? Or that some thinking doesn’t require such changes, and that that subset of thinking is enough to greatly decrease X-risk?
I appreciate this clarification, but I think it’s not enough. The most defensible counterexample: theoretical math is quintessentially technical, whether or not it relates to (non-mental) experimentation. A less defensible but more important counterexample is (careful, speculative, motivated, core) philosophy. An alternative name for what you mean here could be “prosaic”; see e.g. https://www.lesswrong.com/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment.
If “prosaic” sounds derogatory, another alternative would be “in-/on-paradigm”.
All young people and other newcomers should be made aware that on-paradigm AI safety/alignment, while more tractable, richer in feedback, better resourced, and more populated than theory, is also inevitably a form of streetlighting (https://en.wikipedia.org/wiki/Streetlight_effect).