To Raemon: A Bet on My (Personal) Goals

LessWrong Context

I’ll take advantage of the fact that someone influential in the rationalist sphere asked me [here] for something more personal, a bet on my personal goals, to see if I can make more contacts in the community and maybe even find collaborators. Best case: I get a comment from Eliezer. Worst case: I get 100 downvotes. For me, that’s just like throwing a fart into a room that’s already full of shit.

So, how do I bet on myself in three steps? First, a little more context.

The shitty side of my story

I used to be a military firefighter. I dedicated my life to that, as I’ve mentioned here before, and also in other firefighter stories that Gwern once asked me to share. Thanks again, Gwern, hehe. [link to Gwern’s request]

Context of my main error (likely)

When I left my position as a firefighter focused on social education in Brazilian slums, my conclusion was: I was just bothering people. After realizing that my colleagues never responded when I pointed out the diversion of funds from children’s social projects, I thought: well, I guess I’m the only one who cares about this. Not even my best friends believe me. What should I do? Better to step aside.

Maybe I fell into an evolutionary error: if you annoy your tribe, drain resources, and don’t contribute to survival, then the “better strategy” is to die. So I prepared myself to die the way one prepares an Excel spreadsheet: columns, colors, and formulas.

I did everything I could to push everyone away so they wouldn’t miss me—basically questioning their beliefs and insisting on pointing out errors they didn’t want to see. I still hoped maybe someone would support me, proving my “I annoy the tribe” hypothesis wrong.

Salvation by an SOB

By luck or chance, I managed to drive away all my friends and family. I was ready to give away everything I had left—an apartment—to the one person who hadn’t abandoned me. But that SOB noticed what I was about to do and refused to accept it. Someone actually rejected something extremely valuable, just to stop me from screwing up. I thought: well, maybe I’m wrong. Maybe there are people like me, a tribe I haven’t found yet, who care about being better rather than just making money or keeping jobs. Maybe here on LessWrong. Oh, how beautiful that would be!

Turning point toward rationality

That led me to study more and search for my mistakes. I realized I was way too deterministic and that I needed to think in a broader probabilistic spectrum. Another new friend forgave me and insisted I read LessWrong, Superforecasting, and other incredible material. I remembered a book I had once skimmed and set aside, because back then I just wanted to be the “funny firefighter.” Later I read The Drunkard’s Walk in three languages. And holy shit, how wrong I had been all my life. Or, more probabilistically than absolutely: how astronomically wrong I had been.

Extreme applications of probabilistic reasoning to myself

I then built models of how I could think as probabilistically as possible. I went as far as making hourly probabilistic evaluations for years, and I’ve been working on this ever since. For 11 years now, that “tough, funny firefighter” has been alive thanks to the tears of someone who valued my life over a lot of money.

How to filter my expectations into goals? How to evaluate my past without suffering for losing my job and nearly everyone close to me? How to assess my highest and lowest motivation moments as quickly as possible?

What I’m looking for now

I still haven’t fully found the tribe that makes life worth living. I mean, those I most identify with are here, but I haven’t yet gained the community’s attention for my ideas, even if people have valued my experiences. So here’s another attempt.

I’d love to exchange ideas with anyone who has already tried applying probabilistic models to their own satisfactions and motivations.

As for how I do this: for now, I start by defining expectations tied to values and goals.

1. I ask questions that could help sort information by entropy level, so AI systems (and my own thinking) work better: Is this about process improvement or user benefit? Is it an operational-level response or an improvement of the AI’s general code? Is it about the information itself or about communication with the user?

2. Then I propose what seems most similar to human value dimensions, as a starting reference.

3. I define the essential sub-values needed to understand those value dimensions.

4. I look at their relationships together: which dimension is most necessary for the others?

5. I confirm I’m not going too far against evolutionary psychology.

6. From those, I connect eight latent dimensions with my expectations.

7. I define a reference goal as “maximum benefit,” gather some evidence, and compare each of my goals to that reference one by one, giving each a number to estimate its weight. That lets me score my general goals by benefit and establish a hierarchy.

Then I take my hardest goal and compare it to the others, again giving each a number, looking for evidence and comparing them one by one in terms of costs.

With that, I can relate information entropy functions with human values, my expectations, my goals, and create a cost-benefit table for them. Then I refine it further, because it serves as a base to weigh factors like tasks, routines, and moments of satisfaction and motivation.
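To make those last steps concrete, here is a minimal Python sketch of how that kind of scoring could look. Everything in it is a made-up placeholder (the goal names and the benefit and cost numbers are not my real data); it only illustrates the structure: benefits scored against a “maximum benefit” reference, costs scored against the hardest goal, and the two combined into a small cost-benefit table.

```python
# Minimal sketch with hypothetical goals and numbers.
# Benefit weights: 1.0 = the reference goal ("maximum benefit"), smaller = less benefit.
benefits = {
    "find collaborators": 1.0,        # reference goal
    "publish one post a month": 0.7,
    "hourly self-evaluations": 0.4,
}

# Cost weights: 1.0 = the hardest goal, smaller = cheaper.
costs = {
    "find collaborators": 0.6,
    "publish one post a month": 0.5,
    "hourly self-evaluations": 1.0,   # hardest goal, cost reference
}

# Cost-benefit table, sorted so the best benefit/cost ratio comes first.
table = sorted(
    ((goal, b, costs[goal], b / costs[goal]) for goal, b in benefits.items()),
    key=lambda row: row[3],
    reverse=True,
)

print(f"{'goal':<28} {'benefit':>7} {'cost':>5} {'b/c':>5}")
for goal, b, c, ratio in table:
    print(f"{goal:<28} {b:>7.2f} {c:>5.2f} {ratio:>5.2f}")
```

The ratio column is just one way to collapse the table; keeping benefit and cost as separate columns makes it easier to re-weigh them later against tasks, routines, and moments of satisfaction and motivation.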

Thanks, Raemon. If anyone here has tried, or is interested in trying, to quantify or model their own expectations and desires probabilistically, I’d love to compare notes. Maybe I’ll finally find my tribe after all, or at least a first one.
