Towards an Algorithm for (Human) Self-Modification

LessWrong is wonderful. Life-changing. Best thing that ever happened to me.

But it’s not really enough to make one a rationalist, is it? I don’t assimilate or even remember all of the knowledge contained in what I read, and I certainly don’t dynamically incorporate it into my life-strategy.

Say you want your computer to be able to open Microsoft Word files. To do this, you do not upload a PDF that describes how Microsoft Word works. No, you install the program and then you run it.

Over several months of reading LessWrong, I found myself wishing I had a computer program (or several) that could train me to be a rationalist, instead of a website that told me how to be one. I would read an article with a tremendous sense of excitement, thinking to myself, “This is it. I have to build this insight into my life. This is a change that I must realize.” But I would inevitably hit a mental wall: merely knowing that something was a good idea didn’t actually rewire my brain toward better cognitive habits.

I wanted a rationality installer.

I found myself in the midst of a personal crisis. I came to suspect that the reason for my unhappiness and akrasia was that my goals and my actions had become decoupled—I just couldn’t figure out where, or how.

So I set out to make a program that would help me lay out what my actual terminal goals and values are, and then help me causally connect my day-to-day activities to those goals and values. The idea was to build a kind of tree, with end-goals as the parents and daily tasks as the children. The resulting application was not very user-friendly, but it still worked.

With the help of my program, I saw that a year ago I had been very happy with my life because all the activities I pursued on a daily basis were high-utility and directly connected to the achievement of my goals. I saw that I had recently formed a new long-term goal, one whose existence altered my utility function, but that I had not altered my life enough to accommodate it. I made some changes that I expected to be painful sacrifices, but they ended up feeling exactly right once I crossed the threshold. It shocked me how quickly I felt better, how completely I returned to “normal.”

And I thought to myself: hey, why do our cognitive algorithms have to actually be inside our heads? I implemented this one in C++, and it helped me sort out something that was just frustrating, painful, and confusing when I tried to manage it on my own.
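For the curious, here is a minimal sketch in C++ of the kind of goal tree I mean. This is not the actual application, just an illustration of the data structure; the goals and tasks in it are placeholder examples.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// One node in the goal tree: a terminal goal, a sub-goal, or a daily task.
struct Node {
    std::string name;
    std::vector<std::unique_ptr<Node>> children;

    explicit Node(std::string n) : name(std::move(n)) {}

    // Attach a child node and return a pointer to it, so sub-goals
    // and tasks can be chained onto their parents.
    Node* add(std::string child_name) {
        children.push_back(std::make_unique<Node>(std::move(child_name)));
        return children.back().get();
    }
};

// Print the tree with indentation, so the chain from each daily task
// back up to its terminal goal is visible at a glance.
void print(const Node& node, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << node.name << '\n';
    for (const auto& child : node.children) {
        print(*child, depth + 1);
    }
}

int main() {
    // Placeholder example: one terminal goal decomposed into sub-goals
    // and the daily tasks that serve them.
    Node health("Terminal goal: long-term health");
    Node* fitness = health.add("Sub-goal: stay fit");
    fitness->add("Daily task: 30-minute run");
    fitness->add("Daily task: cook instead of ordering out");
    health.add("Sub-goal: sleep well")->add("Daily task: no screens after 11pm");

    print(health);
    return 0;
}
```

Once a tree like this exists, the useful operation is simply walking it and noticing which daily activities have no chain of parents leading up to any terminal goal; those are the decoupled ones.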

What other rationality techniques deserve to be coded into “rationality assistant applications”?

(And how much of a desire would there be for such products?)