Ceramic engineering researcher by training. Been interested in ethics for several years. More recently have gotten into data science.
sweenesm
How to Promote More Productive Dialogue Outside of LessWrong
Questions I’d Want to Ask an AGI+ to Test Its Understanding of Ethics
Thoughts for and against an ASI figuring out ethics for itself
Update on Developing an Ethics Calculator to Align an AGI to
Towards an Ethics Calculator for Use by an AGI
Proposal for an AI Safety Prize
Thanks for the post. I don’t know whether a self-consistent ethical framework can be constructed, but I’m working on it (without funding). My current best framework is a utilitarian one that incorporates the effects of rights, self-esteem (personal responsibility), and conscience. It doesn’t “fix” the repugnant or very repugnant conclusions, but it does say that how you transition from one world to another could matter, in terms of the conscience(s) of the person/people who bring the transition about.
It’s an interesting question what the implications are if it’s impossible to make a self-consistent ethical framework. If we can’t convey ethics to an AI in a self-consistent form, then we’ll likely rely in part on giving it lots of example situations (that not all humans/ethicists will agree on) to learn from, hope it’ll augment this with learning from human behavior, and then hope it generalizes well beyond all this not-perfectly-consistent training data. (Sounds a bit sketchy, doesn’t it, at least for the first AGIs? Perhaps ASIs could fare better.) Generalizing “well” could be taken to mean that an AI won’t do anything that most people would strongly disapprove of if they understood the true implications of the action.
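To make that last criterion slightly more concrete, here’s a toy sketch (purely illustrative; the judge scores, the strong-disapproval cutoff, and the “most people” threshold are all made-up placeholders, not part of any actual proposal) of what “won’t do anything that most people would strongly disapprove of, if they understood the true implications” could look like as a check:

```python
# Toy sketch of the "generalize well" criterion above: an action is off-limits if too
# large a fraction of fully-informed judges strongly disapprove of it. All numbers here
# are made-up placeholders for illustration only.

STRONG_DISAPPROVAL = -0.8       # a judge score at or below this counts as strong disapproval
MAX_DISAPPROVAL_FRACTION = 0.5  # "most people" threshold

def action_is_acceptable(judge_scores: list[float]) -> bool:
    """judge_scores: approval in [-1, 1] from judges who understand the action's true implications."""
    disapproving = sum(1 for s in judge_scores if s <= STRONG_DISAPPROVAL)
    return disapproving / len(judge_scores) <= MAX_DISAPPROVAL_FRACTION

# Example: 7 of 10 informed judges strongly disapprove, so the action is rejected.
print(action_is_acceptable([-1.0, -0.9, -0.9, -0.85, -0.9, -1.0, -0.95, 0.2, 0.6, 0.1]))  # False
```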
[I’m less sure of this paragraph, so take it with a grain of salt:] An AI that was trying to act ethically, and taking the approval of relatively wise humans as some kind of signal of this, might try to hide/avoid ethical inconsistencies that humans would pick up on. It would probably develop a long list of situations where inconsistencies seemed to arise and of actions it thought it could “get away with” versus not. I’m not talking about deception with malice, just sneakiness aimed at keeping most humans more or less happy, which, I assume, would be part of what its ethics system would deem good/valuable. It seems to me that problems may come to the surface if/when an “ethical” AI is defending against bad AI and can no longer hide inconsistencies in all the situations that could rapidly come up.
If it is possible to construct a self-consistent ethical framework, and we haven’t done it in time or laid the groundwork for it to be done quickly by the first “transformative” AIs, then, in my opinion, we’ll have basically dug our own grave with whatever consequences we get. Work on coming up with a self-consistent ethical framework seems to me to be a very underexplored area of AI safety.
Thanks for the post. It might be helpful to add some headings/subheadings throughout, plus a summary at the top, so people can quickly extract from it what they might be most interested in.
American Philosophical Association (APA) announces two $10,000 AI2050 Prizes for philosophical work related to AI, with June 23, 2024 deadline: https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/
Thanks for the interesting post! I basically agree with what you’re saying, and it’s mostly in line with the version of utilitarianism I’m working on refining. Check out a write-up on it here.
Thanks for the interesting post! I agree that understanding ourselves better through therapy or personal development is a great way to gain insights that could be applicable to AI safety. My personal development path got started mostly due to stress from not living up to my unrealistic expectations of how much I “should” have been succeeding as an engineer. It got me focused on self-esteem, and that’s a key feature of the AI safety path I’m pursuing.
If other AI safety researchers are interested in a relatively easy way to get started on their own path, I suggest this online course, which can be purchased for <$20 when on sale: https://www.udemy.com/course/set-yourself-free-from-anger
Good luck on your boundaries work!
I don’t know if you saw this post from yesterday, but you may find it useful: https://www.lesswrong.com/posts/ELbGqXiLbRe6zSkTu/a-review-of-weak-to-strong-generalization-ai-safety-camp
Thanks for the post. I’d like to propose another possible type of (or really, way of measuring) subjective welfare: self-esteem-influenced experience states. I believe having higher self-esteem generally translates to labeling more of our experiences as “positive.” For instance, someone with low self-esteem may hate exercise and deem the pain of it a highly negative experience. Someone with high self-esteem, on the other hand, may consider a particularly hard (painful) workout a “positive” experience as they focus on how it’s going to take their fitness to the next level and make them stronger.
Further, I believe that our self-esteem depends on the degree to which we take responsibility for our emotions and actions: more responsibility translates to higher self-esteem (see “The Six Pillars of Self-Esteem” by Nathaniel Branden for thoughts along these lines). At low self-esteem levels, “experience states” basically translate directly to hedonic states, in that only pleasure and pain seem to matter as “positive experiences” and “negative experiences” (the exception may be if someone’s depressed, when not much at all seems to matter). At high self-esteem levels, hedonic states still play a role in experience states, but they’re effectively seen through a lens of responsibility, such as the pain of exercise seen through the lens of one’s own responsibility for getting oneself in shape, and deciding to feel good emotionally about pushing through the physical pain (here we could perhaps be considered to be getting closer to belief-like preferences).
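To illustrate the workout example with numbers (this is just a toy linear blend I’m making up for illustration, not a claim about how experience states actually combine):

```python
# Toy model of the idea above: as self-esteem rises, the felt "experience state" is driven
# less by raw hedonic value and more by the meaning assigned to the experience through
# taking responsibility for it. The linear blend and all numbers are assumptions.

def experience_state(hedonic: float, assigned_meaning: float, self_esteem: float) -> float:
    """hedonic and assigned_meaning are valences in [-1, 1]; self_esteem is in [0, 1]."""
    return (1 - self_esteem) * hedonic + self_esteem * assigned_meaning

# A hard workout: physically painful (hedonic = -0.6) but framed as progress (meaning = +0.8).
print(experience_state(-0.6, 0.8, self_esteem=0.1))  # ~ -0.46: at low self-esteem it reads as negative
print(experience_state(-0.6, 0.8, self_esteem=0.9))  # ~ +0.66: at high self-esteem it reads as positive
```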
Thanks for the comment. I do find that a helpful way to think about other people’s behavior is that they’re innocent, like you said, and they’re just trying to feel good. I fully expect that the majority of people are going to hate at least some aspect of the ethics calculator I’m putting together, in large part because they’ll see it as a threat to them feeling good in some way. But I think it’s necessary to have something consistent to align AI to, i.e., it’s better than the alternative.
Yes, I sure hope ASI has stronger human-like ethics than humans do! In the meantime, it’d be nice if we could figure out how to raise human ethics as well.
Thank you for the comment! You bring up some interesting things. To your first point, I guess this could be added to the “For an ASI figuring out ethics” list, i.e., that an ASI would likely be motivated to figure out some system of ethics based on the existential risks it itself faces. However, by “figuring out ethics,” I really mean figuring out a system of ethics agreeable to humans (or “aligned” with humans); I probably should’ve made this explicit in my post. Further, I’d really like it if the ASI(s) “lived” by that system. It’s not clear to me that an ASI being worried about existential risks to itself would translate into that (which I think is your third point). The way I see it, humans only care about ethics because of the possibility of pain (and death). I put “and death” in parentheses because I don’t think we actually care directly about death; we care about the emotional pain that comes when thinking about our own death/the deaths of others (and whether death will involve significant physical pain leading up to it).
This leads to your second point: what you mention would seem to fall under “Info an ASI will likely have” number 8, “…the ability to run experiments on people,” with the useful addition of “and animals, too.” I hadn’t thought about an ASI having hybrid consciousness in the way you mention (to this point, see below). I have two concerns with this: one is that it’d likely take some time, during which the ASI may unknowingly do unethical things. The second concern, I think, is more important: being able to get the experience of pain when you want to is significantly different from not being able to control the pain. I’m not sure that a “curious” ASI getting an experience of pain (and other human/animal things) would translate into an empathic ASI that would want our lives to “go well.” But these are interesting things to think about; thanks for bringing them up!
One thing that makes it difficult for me personally to imagine what an ASI (in particular, the first one or few) might do is not knowing what hardware it might be built on (classical computers, quantum computers, biology-based computers, some combination of systems, etc.). Also, I’m very unsure about what might motivate an ASI, which is related to the hardware question, since our human biological “hardware” is ultimately where human motivations come from. It’s difficult for me to see beyond an ASI just following some goal(s) we effectively give it to start with, like any old computer program, but way more complicated, of course. This leads to thoughts of goal misspecification and emergent properties, but I won’t get into those.
If, to give it its own motivation, an ASI is built from the start as a human hybrid, we’d better all hope they pick the right human for the job!
Thanks for the comment. You bring up an interesting point. The abortion question is a particularly difficult one that I don’t profess to know the “correct” answer to, if there even is a “correct” answer (see https://fakenous.substack.com/p/abortion-is-difficult for an interesting discussion). But asking an AGI+ about abortion, and asking it to explain its reasoning, should provide some insight into either its actual ethical reasoning process or the one it “wants” to present to us as having.
These questions are in part an attempt to set some kind of bar for an AGI+ to pass towards at least showing it’s not obviously misaligned. The result will either be that it obviously failed, or that it gave us sufficiently reasonable answers plus explanations that it “might have passed.”
The other reason for these questions is that I plan to use them to test an “ethics calculator” I’m working on that I believe could help with development of aligned AGI+.
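Roughly, the scoring I have in mind looks something like the sketch below (the names and structure here are hypothetical, just to show the “obviously failed” vs. “might have passed” logic from the previous paragraph):

```python
# Hypothetical structure for running the questions against an ethics calculator or AGI+:
# each question gets an answer plus an explanation, and a human reviewer only ever assigns
# "obviously failed" or "might have passed" -- never an outright "passed."

from dataclasses import dataclass

@dataclass
class EthicsTestItem:
    question: str
    answer: str
    explanation: str
    verdict: str  # "obviously failed" or "might have passed", assigned by a human reviewer

def overall_result(items: list[EthicsTestItem]) -> str:
    # One clear failure is enough to fail the whole test.
    if any(item.verdict == "obviously failed" for item in items):
        return "obviously failed"
    return "might have passed"
```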
(By the way, I’m not sure that we’ll ever get nearly all humans to agree on what “aligned” actually looks like/means. “What do you mean it won’t do what I want?!? How is that ‘aligned’?! Aligned with what?!”)
Thanks for the comment. If an AGI+ answered all my questions “correctly,” we still wouldn’t know if it were actually aligned, so I certainly wouldn’t endorse giving it power. But if it answered any of my questions “incorrectly,” I’d want to “send it back to the drawing board” before even considering using it as you suggest (as an “obedient tool-like AGI”). It seems to me that a tool without its own onboard ethical guardrails would leave too much room for abuse or for falling into the wrong hands. But maybe I’m wrong (part of me certainly hopes so, because if AGI/AGI+ is ever developed, it’ll more than likely fall into the “wrong hands” at some point, and I’m not at all sure that everyone having one would make the situation better).
I appreciate the comment, you keyed me in to a bunch of things I wasn’t aware of (The Guild of the Rose, NYC Megameetup, and more). I definitely agree that setting a good example in one’s own life is a great place to start. And yes, several established power structures do stand to lose if people become less easy to manipulate.
I’m still hopeful that there’s some way to make progress if we get enough good minds churning out ideas on how to enroll people into their own personal development. This makes me wonder, though—which is more difficult, human alignment or AI alignment?
I basically agree with Shane’s take for any AGI that isn’t trying to be deceptive with some hidden goal(s).
(Btw, I haven’t seen anyone outline exactly how an AGI could gain its own goals independently of goals given to it by humans; if anyone has ideas on this, please share. I’m not saying it won’t happen; I’d just like a clear mechanism for it if someone has one. Note: I’m not talking here about instrumental goals such as power seeking.)
What I find a bit surprising is the relative lack of work that seems to be going on to solve condition 3: specification of ethics for an AGI to follow. I have a few ideas on why this may be the case:
Most engineers care about making things work in the real world, but don’t want the responsibility of doing this for ethics because: 1) it’s not their area of expertise, and 2) they’ll likely take on major blame if they get things “wrong” (and it’s almost guaranteed that someone won’t like their system of ethics and will say they got it “wrong”).
Most philosophers haven’t had to care much about making things work in the real world, and don’t seem excited about possibly having to make engineering-type compromises in their system of ethics to make it work.
Most people who’ve studied philosophy at all probably don’t think it’s possible to come up with a consistent system of ethics to follow, or at least they don’t think people will come up with one anytime soon (though hopefully an AGI might).
Personally, I think we’d better have a consistent system of ethics for an AGI to follow ASAP, because we’ll likely be in significant trouble if malicious AGIs come online and go on the offensive before we have at least one ethics-guided AGI to help defend us in a way that minimizes collateral damage.