I appreciate these impressionistic samples from the totality of the research community. It makes sense that Anthropic is adjacent to Oxford (e.g. since Anthropic is EA-adjacent) while DeepMind is adjacent to Berkeley (more rooted, like Google, in traditional comp-sci academia?). Interesting that OpenAI stands apart from both. Does that in any way derive from OpenAI having been the independent R&D powerhouse that first challenged DeepMind's hegemony?
It would be interesting to see similar representations for “capabilities research”, since that is what’s driving us over the precipice.
I think ethics is an emergent rational order imposed upon impulses that arise from expectations of pain and pleasure, from emotions like empathy and moral disgust, and perhaps from other sources. It will be hard to know whether there is one true ethics as the natural convergent ideal without first understanding morality as a phenomenon of consciousness, including how it relates to non-moral factors in human decision making.
I think Schopenhauer or Nietzsche at one point lists saint, artist, and scholar as three distinct human ideals. Note that only the saint is governed by morality; the artist is ruled by aesthetics, and the scholar, we could say, by epistemology. I do suspect that moral, aesthetic, and epistemic normativity are all distinct factors in human psychology (and this is not meant to be an exhaustive list). Understanding the “human decision procedure” requires identifying their psychological and ontological roots (e.g. how they emerge in an ontology like @algekalipso’s “process-topological monads”), and that kind of understanding is necessary to obtain the correct metaethics, and to carry out a process like Coherent Extrapolated Volition.
Also, there is probably a different decision-theoretic ontology for entities like non-conscious AIs, in which the basic cause and effect of “decision making” doesn’t even involve qualia, emotion, or conscious thought.