Why hacker mindset and moral alignment would save the world, and why I believe they’re possible

So, I’m tired of not posting because I’m not sure what most people on LessWrong would think, or whether everyone already knows what I’m saying (to be fair, almost every world-optimization-related topic has already been discussed on LW). I believe the thoughts I have would generally be net positive for the world if others had them or read them.

So, I’m going to share a long output Claude gave me in response to a detailed prompt, this article as context, and information about myself in the Project Files.

This is actually true for me now:

When you publish something, I want you to be asserting “this is on some reasonable frontier of what I could write given the effort it would take and the importance of the topic, indicating what I believe to be true and good given the presumed shared context”. It’s not plausible that LLM text meets that definition. (source)

I asked Claude to apply the hacker mindset to describe humanity’s current situation/​problem (≈ repeated zero sum games) and what can be done about it (≈ basically, more people realizing this information, thinking with hacker mindset themselves, becoming morally aligned).

Regarding Claude’s response, I’m curious:

  • What people think of the information. Does everyone on LessWrong already know all these concepts?

  • Do you agree with what is written?

  • Do you believe that most people outside LessWrong are unaware of this?

  • Do you agree that if more people were exposed to and internalized the information (either the doc directly, or just the underlying concepts) humanity would have a better chance at survival?

  • If the above are true, what’s the catch? Is it “yeah, we know you have to improve human thinking, but how are you going to do that, that’s what we’ve been trying to do for a decade?”

But first...

My opinion on the information & on moral alignment

This is probably going to be simplistic and naive, but here is my conception of morality and what the implications of the information are.

I.

Let’s simplify the entire human subjective experience to “feel good” and “feel bad” as two ends of an axis (each corresponding to states in the brain). When life is going well and our evolutionary needs are met, we “feel good.” When life is going badly (there is nothing for us to look forward to, we are eating low-quality food, and we lack the power to escape the situations causing our suffering or to create life circumstances that would make us “feel good”), we “feel bad.”

Everything is subjective. Some people would hate to be slapped in the face. Other people like it. Etc.

However, “everything is subjective, so it’s not possible to claim that X action makes society ‘better’ and Y action makes society ‘worse’” is not a good place from which progress can be made. So, let’s make it objective! More humans having more of the ability to enter the “feel good” brain-state when they desire is moral, and more humans being forced to enter the “feel bad” brain-state is immoral. “Oh so you’re saying that ASI tiling the universe with hedonium-” Let’s not go there. I assume that a human would feel worse if they lost autonomy over their brain-states, or if the world was such that everyone else was getting wireheaded.

A society where everyone has access to plentiful resources and has the freedom to decide when to suffer is good. A society where not everyone has access to plentiful resources (less ability for brain to make feel-good neurochemicals) and is forced into suffering against their will is bad.

Note that I did not say “has the freedom to pursue activities that make them happy” is good, because that phrasing has the risk of missing an important element of morality.

II.

Everything is connected.

I actually gained this frame on reality after reading Ted Kaczynski’s manifesto, which is probably a red flag, but hear me out. I don’t even remember exactly what he said, but it was something like: technology is making everyone’s lives worse, and nobody has the ability to escape this system now (e.g. you can’t choose to live a pedestrian life if there are roads and cars everywhere), so the only solution is to destroy everything and start over. Return to monke, basically.

I do not agree with this, a) violence bad and b) it is impossible to return to monke, so the only solution is increased human maturity/​cognition to match how advanced the tech is and will be. However, reading this is what caused me to look around my room and realize, WTF, every object in my room was made by people that live thousands of miles away from me and that I will probably never meet. We really are far more connected than ever before, and definitely compared to the hunter-gatherer small-group society stuff TK is idealizing. This also means that it is now impossible for the actions of every human to NOT impact millions of other people. Even if we don’t realize it. If this needs further argument: creating an object that gets shipped across the world is one example. Simply having a conversation with someone, that changes their thoughts, that changes their actions, that impacts another person they interact with, that then goes on to- is another example. Every person has ACCESS to WAY more people now, so EVERY decision someone makes results in a butterfly effect that changes other people’s worldstates, which inevitably implicates morality (those other people’s increased or decreased ability to “feel good” or “feel bad” when they choose[1]). Sure, we can ignore this perspective because it’s pointless (it’s not like you can estimate the resulting butterfly effects of any action you take or don’t take), but things WERE NOT always this way, when people were more separated.

This is what causes me to connect morality to other people/society. It is impossible for morality to only be relative to the individual. Most people would probably agree with this: it is immoral to pursue experiences that make you “feel good” at the expense of other people “feeling bad.” However, if you claim that it is immoral for your goal to be “pursue experiences that ‘feel good’” (even if you’re not directly causing others around you to “feel bad”), it sounds like you’re saying: no, what do you mean you want to be “happy,” think about the butterfly effects on other people, you need to donate 100% of your money to charity. And LW has covered at length how this is problematic.

Discussing “whether it’s moral for you to be happy” is obviously emotionally charged. It seems to imply that the conclusion is accusing someone of being immoral, which is unproductive.

If you look at it through an objective lens, the question becomes “to what degree is the behavior you choose to adopt influencing everyone in your society’s ability to access resources, and to have the freedom to decide when to ‘feel good’ or ‘feel bad’?” Aka, morality relative to the society we’re living in. I will just call this societal-morality (the degree to which that description is true when considering the experiences of all humans in that society).

So, I think that if I had the choice, I would rather live in a society where people are more concerned with how they impact societal-morality, and will sometimes sacrifice things that would personally make them “feel good” if they are aware that those things would lead to other people “feeling bad” (remember, this also involves the loss of control). Instead of a society where everyone only cares about whether they and the people around them that they can see feel good, while ignoring the impacts on people they do not know.

I do think that it would be more moral for people to seek awareness of how their actions can cause others to feel bad/lose control, because you could choose to close your ears and never become aware, and that would be bad (a principle for judging whether something is good or bad: if everyone in society acted this way, would life become better or worse for you?).

However, I am not necessarily saying that the most moral society would be one where everyone is constantly aware/​trying to seek awareness of whether their current actions are causing butterfly effects that lead to plus or minus points for societal-morality. Remember I’m not approaching morality from an emotional personal-opinion standpoint and trying to moralize; I’m looking at the objective lens. A state of society that is like that would not be more moral, because it’s unproductive. It does not lead to increased societal-morality compared to alternatives (opportunity cost). You can’t get things done from a state of constantly worrying about awareness and worrying about each individual action. And the more people can get things done/​spend time improving their competence so they can get more things done in the future, the more ability they have to impact societal-morality.

However, you don’t always have to be hypervigilant about morality to be able to be more moral. Sometimes you are faced with a moral choice. For example, if many people are claiming that something you’re doing is immoral, even if you can’t be sure they are correct, it introduces the probability that what’s being said is true. So, choosing to ignore the information instead of deciding to investigate it would probably be immoral. So would thinking, “well, what I’m doing is making my life good, and the lives of the people around me good, so even if changing my behavior could potentially increase the goodness of strangers’ lives, or even societal-morality as a whole, I’m not going to. Everyone has the right to focus on themselves and their family.”

III.

Let’s say I have the choice to become a billionaire right now and succeed, or to dedicate my life to increasing societal-morality while remaining poor and low-status, and succeed. It would suck: I would like being a billionaire, and it would make me and the people around me feel good for a significant length of time in the short term (lots of money and power to cause future “feel good” experiences, too!), but I would still choose the latter option.

Now let’s say I’m already a billionaire. I have to make the choice between continuing to enjoy the fruits of my labor, and no longer being a billionaire, losing much of what I’ve built, becoming poor, and using my competence to increase societal-morality.

It would suck even harder because humans are loss-averse, but I would still choose the latter option.

Now, what if I had to make that choice, but the results would last forever? What if I knew that there’s no coming back: my life will permanently be worse, I will be poor and low-status until I die, and that is the price I must pay for increasing societal-morality (which I’m hypothetically guaranteed to succeed at)? Then it might be impossible for me to make the “moral” decision. I don’t blame myself for that—I’m only human.

But I’m not making that choice. Nobody is making that choice. Because I don’t believe it’s a possible reality.

If I’m increasing societal-morality, the results are bound to return to me. There is no reality where ONLY other people benefit from my work, while I do not. Because everything is connected. That knowledge is what enables me to choose the moral option in the first two scenarios.

In fact, one of the best things I can do to increase societal-morality isn’t just me slaving away on my own, attempting to improve societal conditions and build better systems. It’s influencing other people to embark on the same journey. One wonderful thing about this is that you’re not just telling people to stop enjoying their lives and focus on figuring out how to solve society. Remember the objective lens: the more competent you are, the more you can do anything, including increasing societal-morality. So the most moral action that most people could take, accounting for opportunity cost, is to increase their own competence.[2] This will not only give them a better chance at improving societal-morality, but let them make their own lives and the lives of those around them better (the only thing our brains are evolutionarily wired to actually care about).

People jumping to “save the world” will not necessarily result in the outcome of saving the world. Maybe they’re not competent enough, or maybe they are very competent at getting their desired outcomes to happen, but they just aren’t solving the right problem, they’re optimizing for an inefficient outcome. This majorly important second factor is the hacker mindset: making sure you’re solving real problems and not just what seems to be the problem on the surface level. Solving the core problems that cause all other problems, not just solving the symptoms.

I would rather live in a society where more people are more competent AND morally aligned (aka aware of the reality that their being moral = they’re living in a society with one more moral person = societal-morality, and by extension their own “feel good” ability, WILL increase as long as they are competent/have the hacker mindset).

Morality:

  • I would rather live in a society where...

  • If everyone acts this way, does this increase or decrease my own “feel good” in the long term (not just short term)?

  • If everyone “should” act this way, because it would result in more moral outcomes, then I should act this way.

I would rather live in a society where people want to increase their own competence and think from the hacker mindset to be able to solve real problems.

I would rather live in a society where people are capable of making “selfless” decisions in the short term. Where people are capable of choosing to take a hit to their “feel good”-ness, perhaps for an hour while they’re doing a task, or perhaps even for multiple years while they choose to pursue societal-morality instead of increasing their own wealth and status, because they know that if their efforts create a better society (the more competent they are, the easier it is to believe this), and especially if other people are also putting effort into creating a better society, then the resources they’re able to access and their ability to “feel good” will increase greatly in the long term. To a far greater degree than if their actions were short-sighted, optimizing for personal outcomes while ignoring societal outcomes. And definitely to a far greater degree than if everyone else also behaved that short-sightedly.

Despite all this, the only way I’m able to behave morally is if I believe that me doing so WILL result in a good outcome in the long term. Currently, I lack evidence that this is true: I don’t have much experience and I haven’t built anything. I need to increase my own competence before I’m able to fully believe that my actions can cause significant positive impact. (Those who don’t believe they’re even able to become more competent obviously won’t be able to work towards it.) What I do believe is that if multiple other people choose to a) increase their competence b) gain hacker mindset c) understand and act on the concept of societal-morality by sometimes sacrificing their short-term, personal “feel good” for long-term, societal “feel good”… MY own life would become a LOT better. And that’s what makes this future worth fighting for.

Look at all that humans have achieved. Look at how much we’ve built. Look at how much we’ve learned. Do you not believe that if everyone is morally aligned and fighting for a better future with higher societal-morality, unfathomable amounts of good can happen? Think of the best feeling you’ve ever felt, the happiest, most satisfying moment in your life.

(Actually do it right now: remember how it felt.)

That moment was only able to happen because “the stars aligned”: because the butterfly effect of millions of people that came before you, and that were existing at the same moment in time, combined to make that moment possible. Don’t you think that if humanity is deliberately working towards increasing their power and using that power to benefit all of society instead of only themselves and their circle, it would be possible to be that much more fulfilled in everyday life? We’d actually have control over our own happiness, because we’d have increased resources and quality of life, and every day new scientific research would be conducted (using hacker mindset!) on what actually improves health and happiness. Instead of other people’s competence harming you because they’re forced to compete against you in zero-sum games, their competence would be used in your favor. Instead of large bureaucracies and powerful people choosing to ignore their negative impacts on societal-morality, they would choose to take a step back and gain awareness in order to change their actions, because they understand that not doing so ultimately harms themselves along with everyone else.

This future isn’t possible if people don’t work towards it because they don’t believe it’s possible. But if everyone believes it’s possible, if everyone realizes they’re playing a repeated Prisoner’s Dilemma in which cooperation-sustaining strategies like Tit for Tat win out (as Axelrod’s tournaments showed), then that WILL make this future possible. Not just possible—it WILL happen. Because when rational agents choose to cooperate, the maximization of long-term benefits to all parties is the only possible outcome by the nature of the game. Therefore, the only thing that matters is instilling this belief in increasing numbers of people in society,[3] leading to moral behaviors, leading to moral outcomes. That is what I believe would save the world.
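The repeated-game claim can be made concrete with a quick simulation. This is a minimal sketch: the payoff values are the standard ones from Axelrod’s iterated Prisoner’s Dilemma tournaments, and the strategy names are my own illustrative labels, not anything from the post.

```python
# Iterated Prisoner's Dilemma: Tit for Tat vs. Always Defect.
# Standard Axelrod payoffs: mutual cooperation -> 3 each,
# mutual defection -> 1 each, lone defector -> 5, sucker -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Defect every round, no matter what."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two Tit for Tat players lock into mutual cooperation.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
# Mutual defection earns far less over the same 100 rounds.
print(play(always_defect, always_defect))  # (100, 100)
```

Over 100 rounds, mutual Tit for Tat yields 300 points each versus 100 each for mutual defection, which is the sense in which cooperation dominates in the long run, even though a defector can still squeeze a few extra points out of any single head-to-head.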

With all that said...[4]

Here is the prompt I gave to Claude:

Outline your knowledge of the hacker mindset/​first principles thinking, give an example that thoroughly covers its essence, then apply that thinking to humans, and to humanity and how global society and relations currently are. Articulate the human problem and what must be done to solve it. (Consider what systems are running in the world right now such as economics, then as a different layer of reality, human psychology/​consciousness, and ofc the deeper/​deepest level we can access = physics/​math)

The human 3.0 framework is there for context, but do not overly base your response in specific context of that or myself; i want to see what an ai would come up with without my direction (or maybe without even having all that knowledge of the metacrisis that a human already assembled for you thats steeped in their frame on reality; humans arent perfect at first principles thinking so access the deepest core of reality you can with your own capabilities)

So start from scratch if need be. You know daniel schmachtenberger is only 1 thinker. Perhaps there are deeper cores, perhaps there are frames truer to reality. What are humans not realizing? I myself am trying to develop the hacker mindset. I was trapped in conventional thought for a long time so it must be a journey ahead of me to unlearn the illusions and level up

And here is Claude’s response:

(whoops, forgot it auto-rejects LLM posts)

  1. ^

    Doing work might cause you to enter a “feel bad” state, but you would still choose to do it, because you can make money to make yourself “feel good” more later. You wouldn’t choose to keep “feeling good” for as many consecutive moments as possible. That’s just the hedonic treadmill.

  2. ^

    This article about hacker mindset/​first principles thinking refers to this as “expanding the Reality box.”

  3. ^

    I’m imagining that with this information/​belief, actual billionaires & powerful people in our world, when they’re actually faced with the moral decision, would be able to use the same reasoning to make the moral choice! “Even if behaving morally makes my life temporarily become worse, the ‘feel good’-ness of my life will ultimately bounce back up to a far higher level in the long term than if I don’t make that choice. Because I am contributing to a society where other people are also all fighting for the common good of all humans, and I believe in their competence to make it happen, because I believe in all human competence including mine. Therefore, it’s worth doing the societally-moral action in this situation.”

  4. ^

    Lol, I asked Claude for feedback on the post, and it said my title is “honest but slightly undersells it” and suggested I rename it to “The prisoner’s dilemma is already solved — the only remaining problem is belief.” Honestly so true, but I’ll keep my own title on principle :)
