Can you expand on this point:
“If we do build an AGI, its actions will determine what is done with the universe.
If the first such AGI we build turns out to be an unfriendly AI that is optimizing for something other than humans and human values, all value in the universe will be destroyed. We are made of atoms that could be used for something else.”
Especially in the sense of: why would the first unfriendly AI we build immediately be uncontainable and surpass current human civilization's ability to destroy it?
This was interesting, and I could see how I would fit myself into these categories. However, I question whether they are mutually exclusive and collectively exhaustive of all personality motivations. The scheme might work well for people who are rational and strive to be consistent in their actions, but I know plenty of people who swap the principles they seem to act on depending on the situation. To use the Hogwarts example, they would switch from one house to the next depending on their mood.
And I can think of at least one type of motivation that none of the houses seem to cover: pure interest in the work itself. Think of the hermit savant who doesn't care for any meta/epistemological system (Ravenclaw), has no particular moral or personal convictions (Gryffindor), and cares neither about others (Hufflepuff) nor about themselves (Slytherin). They simply care about the work or thing they are fixated on.