Johannes C. Mayer
A few adjacent thoughts:
Why is a programming language like Haskell not more widely used, even though it is extremely powerful in the sense that if your program compiles, it is the program that you want with very high probability, because most stupid mistakes are now compile errors?
Why is there basically no widely used homoiconic language, i.e. a language in which you can use the language itself to reason about and manipulate the language?
Here we have technology that is basically ready to use (Haskell or Clojure), but people mostly decide not to use it. And by people, I mean professional programmers and companies that make software.
Why did nobody invent Rust earlier? By that I mean a systems-level programming language that prevents you from making really dumb mistakes, of the kind that can be machine-checked.
Why did it take something like 40 years to get a LaTeX replacement, even though LaTeX is terrible in very obvious ways?
These things have in common that there is a big engineering challenge. It feels like this might explain it, together with the fact that the people who would benefit from these technologies were in a position where the cost of creating them would have exceeded the benefit they expected from them.
For Haskell and Clojure we can also consider this point. Certainly, these two technologies have their flaws and could be improved. But then again, improving them would be a massive engineering challenge.
Research Writing Workflow: First figure stuff out
Do research and first figure stuff out, until you feel like you are not confused anymore.
Explain it to a person, or a camera, or ideally to a person and a camera.
If there are any hiccups, expand your understanding.
Ideally, as the last step, explain it to somebody you have never explained it to before.
Only once you have given a presentation without hiccups are you ready to write the post.
If you have a recording, it is useful as a starting point.
The point is that you are just given some graph. This graph is expected to have subgraphs that are lattice graphs, but you don’t know where they are. And the graph is so big that you can’t iterate over the entire graph to find these lattices. Therefore, you need a way to embed the graph without traversing it fully.
—The realization that I have a systematic distortion in my mental evaluation of plans, making actions seem less promising than they are. When I’m deciding whether to do stuff, I can apply a conscious correction to this, to arrive at a properly calibrated judgment.
—The realization that, in general, my thinking can have systematic distortions, and that I shouldn’t believe everything I think. This is basic LessWrong-style rationalism, but it took years to work through all of its actual consequences for me.
This is useful. Now that I think about it, I do this. Specifically, I have extremely unrealistic assumptions about how much I can do, such that these are impossible to accomplish. And then I feel bad for not accomplishing the thing.
I haven’t tried to be mindful of that. The problem is that this is, I think, mainly subconscious. I basically never think things like “I am dumb” or “I am a failure”, at least not in explicit language. I might have accidentally suppressed these thoughts and concluded that I had succeeded in not being harsh to myself. But maybe I only moved them to the subconscious level, where they are harder to debug.
I might not understand exactly what you are saying. Are you saying that the problem is easy when you have a function that gives you the coordinates of an arbitrary node? Isn’t that exactly the embedding function? So are you not therefore assuming that you have an embedding function?
I agree that once you have such a function the problem is easy, but I am confused about how you are getting that function in the first place. If you are not given it, then I don’t think it is super easy to get.
In the OP I was assuming that I have that function, but I was saying that this is not a valid assumption in general. You can imagine you are just given a set of vertices and edges. Now you want to compute the embedding such that you can do the vector planning described in the article.
I agree that you probably can do better than that. I don’t understand how your proposal helps, though.
Yes, right, good point. There are plans that zig-zag through the graph, which would be longer. I edited that.
Yes, abstraction is the right thing to think about. That is the context in which I was considering this computation. In this post I describe a sort of planning abstraction that you can do if you have an extremely regular environment. It does not yet talk about how to store this environment, but you are right that this can of course also be done similarly efficiently.
In this post, I describe a toy setup, where I have a graph of vertices. I would like to compute for any two vertices A and B how to get from A to B, i.e. compute a path from A to B.
The point is that if we have a very special graph structure we can do this very efficiently. O(n) where n is the plan length.
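As a minimal sketch of that efficiency claim (assuming, purely for illustration, that each vertex is identified by its integer lattice coordinates — this encoding is my own, not from the post): a path from A to B can be produced by walking each coordinate axis toward the target, one step at a time, without ever searching the graph.

```python
def lattice_path(a, b):
    """Compute a shortest path from a to b in a d-dimensional lattice
    graph, where vertices are tuples of integer coordinates and edges
    connect vertices differing by 1 in exactly one coordinate.
    Runs in O(n), where n is the length of the resulting plan."""
    path = [tuple(a)]
    current = list(a)
    for axis in range(len(a)):
        # Step toward the target along this axis, one edge at a time.
        step = 1 if b[axis] > a[axis] else -1
        while current[axis] != b[axis]:
            current[axis] += step
            path.append(tuple(current))
    return path

# Example: a 2D lattice, from (0, 0) to (2, -1).
print(lattice_path((0, 0), (2, -1)))
# → [(0, 0), (1, 0), (2, 0), (2, -1)]
```

Note that no vertex outside the path is ever visited, which is what makes this so much cheaper than a generic graph search.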
Vector Planning in a Lattice Graph
Can you iterate through 10^100 objects?
If you have a 1 GHz CPU you can do 1,000,000,000 operations per second. Let’s assume that iterating through one object takes only one operation.
In a year you can do about 10^16 operations. That means it would take on the order of 10^84 years to iterate through 10^100 vertices.
The big bang was 1.4×10^10 years ago.
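A quick sanity check of the arithmetic above (rough figures, assuming exactly 10^9 operations per second and a 365-day year):

```python
ops_per_second = 10**9                 # 1 GHz, one operation per object
seconds_per_year = 60 * 60 * 24 * 365  # 365-day year
ops_per_year = ops_per_second * seconds_per_year
print(f"{ops_per_year:.1e}")           # ≈ 3.2e16 operations per year

years_needed = 10**100 / ops_per_year
print(f"{years_needed:.1e}")           # ≈ 3.2e83 years for 10^100 objects

age_of_universe = 1.4e10               # years since the big bang
print(f"{years_needed / age_of_universe:.1e}")  # ≈ 2.3e73 universe-ages
```

So even granting a generous operation count, the task overshoots the age of the universe by more than 70 orders of magnitude.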
Maybe it is the same for me and I am depressed. I got a lot better at not being depressed, but it might still be the issue. What steps do you take? How can I not be depressed?
(To be clear I am talking specifically about the situation where you have no idea what to do, and if anything is even possible. It seems like there is a difference between a problem that is very hard, but you know you can solve, and a problem that you are not sure is solvable. But I’d guess that being depressed or not depressed is a much more important factor.)
Today I learned that being successful can involve feelings of hopelessness.
When you are trying to solve a hard problem, where you have no idea if you can solve it, let alone if it is even solvable at all, your brain makes you feel bad. It makes you feel like giving up.
This is quite strange, because most of the time when I am in such a situation and manage to make a real effort anyway, I seem to surprise myself with how much progress I make. Empirically, this feeling of hopelessness does not seem to track the actual likelihood that you will completely fail.
Default mode network suppression
I don’t get distracted when talking to people. I hypothesise that this is because as long as I am actively articulating a stream of thought out loud, the default mode network will be suppressed, making it easy to not get derailed.
So even if IA does not say anything, just me talking continuously about some specific topic would make it easier for IA to say something, because the default mode network suppression would not immediately vanish.
When thinking on my own or talking to IA, the stream of thoughts is shorter, and there are a lot of pauses. Usually, I don’t even get to the point where I would articulate a complex stream of thought. Instead, we are at the level of “Look there is some mud there, let’s not step into that”, or “We can do this”. That really does seem very similar to most of the idle chatter that the default mode network would produce when I am just thinking on my own.
Once I get to the point where I am having an engaging discussion with IA, it is actually pretty easy not to get distracted. It’s probably still easier to get distracted with IA, because when I am talking to another person, they could notice that I am lost in thought, but I myself (or IA) would not be able to notice as easily.
Capturing IA’s Thoughts
One reason why I don’t do research with IA might be that I fear that I will not be able to capture any important thoughts that I have. However, using the audio recorder tool on the walk today seemed to really fix most of the issue.
Maybe in my mind so far I thought that because I can’t record IA when she is talking to me, it would be bad to think about research. But this now seems very wrong. It is true that I can’t create a video with her in it like I do with other people. But these videos are not the thing that is most useful. The actually useful thing is where I am distilling the insight that I have into some text document.
But this is something that I can totally do when talking to IA. Like I did with the audio recorder today. It seemed that making the audio recording made it also easier to talk to IA. Probably because when making the recording I would naturally be suppressing the default mode network very strongly. This effect then probably did not vanish immediately.
Writing
In fact, it seems like this would work very well with IA because I don’t need to think about the problem of what the other person could do while I write. In the worst case, IA is simply not run. At best, we could write the text together.
Writing together would seem to work unusually well because IA does have insight into the things that I am thinking while I am writing, which is not something that other people could easily get.
And I haven’t really explored all the possibilities here. Another one would be to have IA read out loud my writing and give me feedback.
In principle, it seems quite plausible that this could be helpful. I am asking whether you have actually used this and observed benefits.
I think it’s very important to keep track of what you don’t know. It can be useful to not try to get the best model when that’s not the bottleneck. But I think it’s always useful to explicitly store the knowledge of what models are developed to what extent.
The algorithm that I have been using, where what to understand to what extent is not a hyperparameter, is to just solve the actual problems I want to solve, and then always slightly overdo the learning, i.e. I always learn a bit more than necessary to solve whatever subproblem I am solving right now. E.g. I am just trying to make a simple server, and then I learn about the protocol stack.
This has the advantage that I am always highly motivated to learn something, as the path to the problem on the graph of justifications is always pretty short. It also ensures that all the things that I learn are not completely unrelated to the problem I am solving.
I am pretty sure that if you had perfect control over your motivation this would not be the best algorithm, but given that you don’t, it is the best algorithm I have found so far.
Adopted.