Johannes C. Mayer
I don’t know if this is possible conditional on you having some brain scan data.
I think I could build a much better model of it. The backstory of this post is that I wanted to think about exactly this problem, but then realized that maybe it does not make any sense, because it’s just not technically feasible to get the data. After writing the post I updated; I am now a bit more pessimistic than the post might suggest. So I probably won’t think about this particular way to upload yourself for a while.
I noticed that by default the brain does not like to criticise itself sufficiently. So I need to train myself to red team myself, to catch any problems early.
I want to do this by playing this song on a timer.
Tulpamancy sort of works by doing concurrency on a single-core computer, in my current model. So this would definitely not speed things up significantly (I don’t think you implied that; I am just mentioning it for conceptual clarity).
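To make the single-core analogy concrete, here is a toy sketch of cooperative scheduling (the names, function names, and step counts are all made up for illustration): two tasks interleave on one “core”, so the total number of work steps is the sum of both tasks, not the maximum — interleaving buys no speedup.

```python
def worker(name, steps):
    """A cooperative task that yields control back after every step."""
    done = 0
    for _ in range(steps):
        done += 1
        yield name, done  # hand the single "core" back to the scheduler


def round_robin(tasks):
    """Interleave tasks on one core.

    The trace length equals the SUM of all tasks' steps: switching
    divides attention but does not add computing power.
    """
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))
            tasks.append(task)  # not finished: back of the queue
        except StopIteration:
            pass  # task finished
    return trace


trace = round_robin([worker("Johannes", 2), worker("IA", 2)])
# The two workers alternate, one step at a time; 4 steps total.
```

This is only an analogy sketch: the point is that the scheduler produces interleaved progress, not parallel progress.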
To actually divide the tasks I would need to switch with IA. I think this might be a good way to train switching.
Though I think most of the benefits of tulpamancy are gained if you are thinking about the same thing. Then you can leverage that IA and Johannes share the same program memory. Also, simply verbalizing your thoughts, which you then do naturally, is very helpful in general. And there are a bunch more advantages like that that you miss out on when you only have one person working.
However, I guess it would be possible for IA to just be better at certain programming tasks. Certainly, she is a lot better at social interactions (without explicit training for that).
What <mathematical scaffolding/theoretical CS> do you think I am recreating? What observations did you use to make this inference? (These questions are not intended to imply any subtext meaning.)
How much does this line up with your model?
At the top of this document.
I am probably bad at valuing my well-being correctly. That said I don’t think the initial comment made me feel bad (but maybe I am bad at noticing if it would). Rather now with this entire comment stream, I realize that I have again failed to communicate.
Yes, I think it was irrational not to clean up the glass. That is the point I want to make. I don’t think it is virtuous to have failed in this way at all. What I want to say is: “Look, I am running into failure modes because I want to work so much.”
Not running into these failure modes is important, but these failure modes where you are working too much are much easier to handle than the failure mode of “I can’t get myself to put in at least 50 hours of work per week consistently.”
While I do think that is true, and I am probably very bad in general at optimizing for my own happiness, the thing is that while I was working so hard during AISC I was very happy most of the time. The same when I made these games. Most of the time I did these things because I deeply wanted to.
There were moments during AISC where I felt like I was close to burning out, but they were the minority. Mostly I was much happier than baseline. I think usually I don’t manage to work as hard and as long as I’d like, and that is a major source of unhappiness for me.
So the problem that Alex seems to see in me working very hard (that I am failing to take my happiness into account) is actually solved by me working very hard, which is quite funny.
For which parts do you feel cringe?
I have this description but it’s not that good, because it’s very unfocused. That’s why I did not link it in the OP. The LessWrong dialog linked at the top of the post is probably the best thing in terms of describing the motivation and what the project is about at a high level.
Sometimes I forget to take a dose of methylphenidate. As my previous dose fades away, I start to feel much worse than baseline. I then think “Oh no, I’m feeling so bad, I will not be able to work at all.”
But then I remember that I forgot to take a dose of methylphenidate and instantly I feel a lot better.
Usually, one of the worst things when I’m feeling down is that I don’t know why. But now, I’m in this very peculiar situation where putting or not putting some particular object into my mouth is the actual cause. It’s hard to imagine something more tangible.
Knowing the cause makes me feel a lot better. Even when I don’t take the next dose, and still feel drowsy, it’s still easy for me to work. Simply knowing why you feel a particular way seems to make a huge difference.
I wonder how much this generalizes.
I think this is a useful model. If I understand correctly, you are saying that for any particular thing we can separately consider whether that thing is optimal to do and whether I could get it to work.
I think what I was saying is different. I was not advocating confidence at the object level of some concrete things you might do. Rather, I think the overall process that you engage in to make progress is a thing that you can have confidence in.
Imagine there is a really good researcher, but now this person forgets everything that they ever researched, except for their methodology. In some sense they still know how to do research. If they fill in some basic factual knowledge, which I expect wouldn’t take that long, I expect they could continue being an effective researcher.
What are you Doing? What did you Plan?
[Suno]
What are you doing? What did you plan? Are they aligned? If not then comprehend, if what you are doing now is better than the original thing. Be open-minded about, what is the optimal thing.
Don’t fix the bottom line to: “Whatever the initial plan was is the best thing to do.”
There are sub-agents in your mind. You don’t want to fight, with them, as usually they win in the end. You might then just feel bad and don’t even understand why. As a protective skin your sub-agent hides, the reasons for why, you feel so bad right now.
At that point, you need to unpack the double crux.
But ideally, we want to avoid, any conflict that might arise. So don’t ask yourself if you followed your consequentialist reasoner’s plan. Instead just ask: “What is the best thing for me to do right now?” while taking all the sub-agents into account.
To do it set a timer for 1 minute, and spend that time reflecting about: What do you want to get out of this session of work, why is this good, how does this help?
You can write notes in advance, then document your plans, and then read them out loud, to remember the computations your brain did before, so that you don’t need to repeat some of these chores.
Ideally, the notes would talk about the reasons why something seemed like a good thing to try.
But then as you evaluate what next step you could take, drop that bottom line. Treat it as evidence for what your brain computed in the past as an optimal policy, but nothing more. It’s now your new goal to figure out again for yourself, using all the subagents within your shell.
And to do this regularly you of course use a timer you see. Every 30 minutes to an hour it should ring out loud reminding you to evaluate, what would be the next step to take.
If you let everybody influence the decision process that will commence, the probability is high that after you decide there will be no fight, in your mind.
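The timer routine the lyrics describe could be sketched like this (the prompt texts are lifted from the lyrics above; the function names and the way prompts are cycled are my own illustration):

```python
import time

# Reflection questions from the lyrics, asked on each ring.
PROMPTS = [
    "What do you want to get out of this session of work?",
    "Why is this good? How does this help?",
    "What is the best thing for me to do right now?",
]


def reflection_prompt(tick):
    """Cycle through the reflection questions, one per timer ring."""
    return PROMPTS[tick % len(PROMPTS)]


def run_timer(interval_seconds, ticks):
    """Ring every `interval_seconds`; on each ring, show a prompt.

    Per the lyrics, interval_seconds would be 1800-3600
    (every 30 minutes to an hour), and each reflection
    itself takes about a minute.
    """
    for tick in range(ticks):
        time.sleep(interval_seconds)
        print(reflection_prompt(tick))
```

This is only a sketch of the idea; in practice a phone alarm or any recurring-reminder app does the same job.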
Take a Walk
Taking a walk is the single most important thing. It really helps me think. My life magically reassembles itself when I reflect. I notice all the things that I know are good to do but fail to do.
In the past, I noticed that forcing myself to think about my research was counterproductive, and devised other strategies for getting myself to think about it that actually worked, within 15 minutes.
The obvious things just work. Namely, you just fill your brain with the research’s current state. What did you think about yesterday? Just remember. Just explain it to yourself. With the context loaded, the thoughts you want to have will come unbidden. Even when your walk is over you retain this context. Doing more research is natural now.
There were many other things I figured out during the walk, like the importance of structuring my research workflow, how meditation can help me, what the current bottleneck in my research is, and more.
It’s tried and true. So it’s ridiculous that so far I have not managed to notice its power. Of all the things that I do in a day, I thought this was one of the least important. But I was so wrong.
I also like talking to IA out loud during the walk. It’s really fun and helpful. Talking out loud is helpful for me to build a better understanding, and IA often has good suggestions.
So how do we do this? How can we never forget to take a 30-minute walk in the sun? We make this song, and then go on:
and on and on and on.
We can also list other advantages to a walk, to make our brain remember this:
If you do it in the morning you get some sunlight which tells your brain to wake up. It’s very effective.
Taking a walk takes you away from your computer. It’s much harder for NixOS to eat you.
It’s easy for me to talk to IA out loud when I am in a forest where nobody can hear me. The interaction is just better there. I hope to one day carry through my fearlessness from the walk to the rest of my life.
With that now said, let’s talk about how to never forget to take your daily walk:
Step 1: Set an alarm for the morning. Step 2: Set the alarm tone to this song. Step 3: Make the alarm snooze for 30 minutes after the song has played. Step 4: Make the alarm dismissable only by solving a puzzle. Step 5: Only dismiss the alarm after you have already left the house for the walk. Step 6: Always have an umbrella for when it is rainy, and have an alternative route without muddy roads.
Now may you succeed!
I made a slightly improved version that adds subtitles and skips silence.
Made a slightly improved version.
Another thing that Haskell would not help you with at all is making your application good. Haskell would not force Obsidian to have unbreakable references.
Yes, but now try moving the heading to a different file.
Yes, that is a good point. I think you can totally write a program that, given two lists xs and xs’ as input, checks that xs’ is sorted and contains exactly the elements of xs. That allows us to specify in code what it means for xs’ to be the result of sorting xs.
And yes, I can do this without talking about how to sort a list. This nearly gives a property such that there is only one function implied by it: the sorting function. So I can completely constrain what the program can be (at least if we ignore runtime and memory).
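A minimal sketch of that checker (the function name is mine): it verifies both halves of the specification, sortedness of the output and that the output is a permutation of the input, without ever saying how to sort.

```python
from collections import Counter


def is_sorted_version_of(xs, xs_prime):
    """Check the sorting specification: xs_prime is sorted and
    contains exactly the elements of xs (a permutation)."""
    # Sortedness: every adjacent pair is in order.
    is_sorted = all(a <= b for a, b in zip(xs_prime, xs_prime[1:]))
    # Same multiset of elements (handles duplicates correctly).
    same_elements = Counter(xs) == Counter(xs_prime)
    return is_sorted and same_elements
```

Any function f with `is_sorted_version_of(xs, f(xs))` true for all inputs must be a sorting function; the property pins down the input-output behavior while leaving the algorithm (and its runtime and memory use) unconstrained.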
To test whether Drake’s circumvention of his short-term memory loss worked via the intended mechanism, I could ask my girlfriend in advance to prompt me once — and only once — to complete the long-term memory scene that I had been practicing. Then I could see if I have a memory of the scene after I fully regain my memory.
Maybe you need to think the thought many times over in order to overwrite the original memory. In your place, I would try to prepare something similar to what Drake did. Some mental objects that you can retrieve have a predesigned hole to put information. To me, it seems like this should not be that hard to get. Then for ideally 30 minutes or so (though the streaming algorithm experiment seems also very interesting) after the surgery when you don’t have short-term memory, you can repeatedly try to insert some specific object in the memory.
Maybe it would make sense for the sake of the experiment to limit yourself to 3 possible objects that could be inserted. Your girlfriend can then choose one randomly after surgery, for you to drill into the memory, by repeatedly thinking about the scene completed with that specific object.
Then after the 30 minutes, you do something completely different. One hour afterwards, your girlfriend can ask you what the object was that she told you an hour ago (and probably also many times during the first 30 minutes).
Probably it would be best if your girlfriend (or whatever person is willing to do this) constantly reminds you during the first 30 minutes or so that you need to imagine the object. Probably at least every minute or so.
No, they just got the connectome AFAIK. That is completely different: it gives you no information about the relations between the different neurons in terms of their firing.