I spend most of my time promoting psychologically safe human interactions through Emotional Fitness Peer Coaching. I do this through one-on-one sessions, by leading and participating in group sessions and full-day workshops with both managers and people with chronic conditions, and sometimes by giving lectures and keynotes on listening and coaching skills and on how bias shapes our actions in ways we are often unaware of.
I am also a PhD student in the History of Science & Ideas who realized, 30+ years ago, that my dissertation would likely not be very impactful and that I needed to see more of society than I had at the time. I therefore left my dissertation on the meaning of Albert Schweitzer (1875–1965) as a moral example to his admirers. When I returned to my unfinished research some 25 years later, I found the topic surprisingly consistent with the things I had been working on in completely different domains (leadership, gender, and diversity), in ways that are worthy of their own posts.
Later, I also realized that Albert Schweitzer’s once highly influential thinking on Reverence for Life was, in fact, an attempt at something that seems remarkably relevant to today’s AI alignment endeavors: a universal summary of the ethics of all major religions.
Schweitzer lived long enough to be a global moral superstar, then to be seen as irrelevant, and to be rightly criticized for many of his attitudes, thoughts, and choices. That criticism doesn’t undo everything he thought and achieved, and to me it seems there may be guidance for AI alignment in how he thought about protecting life.
I am new to LW and would like to introduce myself.
I came here to learn more about AI alignment discussions. I’m especially interested in the perspective that the specification for AI alignment may contain an existential-level systematic error. To me, originally a historian of science and ideas, aligning AI with human preferences does not seem wise. Historically, we can see that human preferences, due to biases, shortsightedness, and social dynamics, can be quite harmful, not only to other species but also to humans and to civilizational continuity.
In the ’90s, I left my dissertation on Albert Schweitzer because I wanted to make an impact in other parts of society. When AI became a big thing, I realized that his thinking on Reverence for Life (Nobel Peace Prize for 1952) speaks directly to those alignment endeavors.
Schweitzer tried to solve the ethical dilemma of life: that promoting life means we have to harm life (the predator-prey dynamic for instance). I think this dilemma has a direct analog in alignment that I’d like to explore in a future post.
Schweitzer had a very Christian solution to that dilemma: feel the moral weight of guilt (German Schuld) every time you harm life; every time you eat, for instance, or, in his case as a medical doctor, every time he killed bacteria to save a human being.
I don’t believe in guilt, but I do believe in responsibility, and I am exploring ways to translate the concept of guilt/debt (Schuld carries both meanings in German) into principles and mathematical frameworks that could help align AI.