Johannes C. Mayer
Yes, but now try moving the heading to a different file.
Yes, that is a good point. I think you can totally write a program that, given two lists xs and xs’ as input, checks that xs’ is sorted and contains exactly the elements of xs. That allows us to specify in code what it means for xs’ to be the result of sorting xs.
And yes, I can do this without talking about how to sort a list. I can nearly give a property such that only one function is implied by it: the sorting function. I can constrain what the program can be completely (at least if we ignore runtime and memory behavior).
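A minimal sketch of such a checker in Python (the function name is my own invention, not from any library):

```python
from collections import Counter

def is_sorted_version_of(xs, ys):
    """Check that ys is sorted and contains exactly the elements of xs.

    This says *what* the sorted list is, without saying anything about
    *how* to compute it.
    """
    is_sorted = all(a <= b for a, b in zip(ys, ys[1:]))
    same_elements = Counter(xs) == Counter(ys)  # same multiset of elements
    return is_sorted and same_elements

assert is_sorted_version_of([3, 1, 2], [1, 2, 3])
assert not is_sorted_version_of([3, 1, 2], [3, 2, 1])   # not sorted
assert not is_sorted_version_of([3, 1, 2], [1, 2, 2])   # wrong elements
```

Any function that satisfies this check on all inputs behaves exactly like a sorting function, which is the sense in which the property pins the program down.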
To test whether Drake’s circumvention of his short-term memory loss worked via the intended mechanism, I could ask my girlfriend in advance to prompt me once — and only once — to complete the long-term memory scene that I had been practicing. Then I could see if I have a memory of the scene after I fully regain my memory.
Maybe you need to think the thought many times over in order to overwrite the original memory. In your place, I would try to prepare something similar to what Drake did. Some mental objects that you can retrieve have a predesigned hole to put information. To me, it seems like this should not be that hard to get. Then for ideally 30 minutes or so (though the streaming algorithm experiment seems also very interesting) after the surgery when you don’t have short-term memory, you can repeatedly try to insert some specific object in the memory.
Maybe it would make sense for the sake of the experiment to limit yourself to 3 possible objects that could be inserted. Your girlfriend can then choose one randomly after surgery, for you to drill into the memory, by repeatedly thinking about the scene completed with that specific object.
Then, after the 30 minutes, you do something completely different. One hour later, your girlfriend can ask you what the object was that she told you an hour ago (and she should probably also ask many times during the first 30 minutes).
Probably it would be best if your girlfriend (or whatever person is willing to do this) constantly reminds you during the first 30 minutes or so that you need to imagine the object. Probably at least every minute or so.
I don’t know about making god software, but human software is a lot of trial and error. I have been writing code for close to 40 years. The best I can do is write automated tests to anticipate the kinds of errors I might get. My imagination just isn’t as strong as reality.
I think it is incorrect to say that fully formal verification is the only alternative to whatever the heck we are currently doing. For instance, there is property-based testing as a first step (which you may also be referring to with “automated tests”, though I would guess you mainly mean unit tests).
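A bare-bones sketch of the idea, using only the standard library (a real property-based testing tool like Hypothesis adds generation strategies and shrinking of failing cases; the names here are my own):

```python
import random
from collections import Counter

def prop_sort_is_correct(sort_fn, trials=200):
    """Property-based test sketch: generate many random inputs and check
    the *properties* of the output, instead of hand-picked examples."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        ys = sort_fn(xs)
        # Property 1: the output is in ascending order.
        assert all(a <= b for a, b in zip(ys, ys[1:])), (xs, ys)
        # Property 2: the output has exactly the input's elements.
        assert Counter(xs) == Counter(ys), (xs, ys)
    return True

assert prop_sort_is_correct(sorted)
```

The point is that the test states what sorting means, and the random inputs probe for the error cases my imagination would have missed.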
Maybe try Haskell or even better Idris? The Haskell compiler is very annoying until you realize that it loves you. Each time it annoys you with compile errors it actually says “Look I found this error here that I am very very sure you’d agree is an error, so let me not produce this machine code that would do things you don’t want it to do”.
It’s very bad at communicating this though, so its words of love usually are blurted out like this:
Don’t bother understanding the details, they are not important.
So maybe Haskell’s greatest strength, being a very “noisy” compiler, is also its downfall. Nobody likes being told that they are wrong, well at least not until you understand that your goals and the compiler’s goals are actually aligned. And the compiler is just better at thinking about certain kinds of things that are harder for you to think about.
In Haskell, you don’t really ever try to prove anything about your program in your program. All of this you get by just using the language normally. You can then go one step further with Agda, Idris2, or Lean, and start to prove things about your programs, which easily can get tedious.
But even then when you have dependent types you can just add a lot more information to your types, which makes the compiler able to help you better. Really we could see it as an improvement to how you can tell the compiler what you want.
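Even Python’s much weaker type hints can gesture at this. Here is a sketch (all names are mine, checkable with a tool like mypy): by tagging a list as sorted, the type carries extra information, and a function that needs that fact can demand it.

```python
from typing import NewType

# NewType tags a plain list with the extra fact "this has been sorted".
SortedList = NewType("SortedList", list)

def sort(xs: list) -> SortedList:
    # The only place allowed to bless a list as SortedList.
    return SortedList(sorted(xs))

def binary_search(xs: SortedList, target: int) -> bool:
    # Correct only on sorted input, which the type now records.
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return True
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return False

# A type checker flags binary_search([3, 1, 2], 2): a bare list
# was never blessed as SortedList.
assert binary_search(sort([3, 1, 2]), 2)
assert not binary_search(sort([3, 1, 2]), 5)
```

In a dependently typed language the type could state the sortedness property itself, rather than just a tag the programmer promises to use honestly.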
But again, you know what you can do in dependent type theory? NOT use dependent type theory! You can use Haskell-style code in Idris whenever that is more convenient.
And by the way, I totally agree that all of the languages I named are probably only ghostly images of what they could truly be. But I guess humanity cares more about making JavaScript run REALLY REALLY FAST.
And I got to be careful not to go there.
I feel you on being distracted by software bugs. I’m one of those guys that reports them, or even code change suggestions (GitHub Pull Requests).
Yeah, I also do this. My experience so far has generally been very positive. It’s really cool when I open an issue saying “I think it would be nice if this program did X”, and then the program does X 1 or 2 weeks later.
I don’t know where to open an issue, though, about the fact that I think it would be better not to build a god we don’t comprehend. Maybe I haven’t looked hard enough.
Let xs be a finite list of natural numbers. Let xs’ be the list that is xs sorted ascendingly.
I could write down in full formality, what it means for a list to be sorted, without ever talking at all about how you would go about calculating xs’ given xs. That is the power I am talking about. We can say what something is, without talking about how to get it.
And yes, this still applies in constructive logic, because being sorted is just a logical property of a list. It’s a definition. To give a definition, I don’t need to talk about what kind of algorithm would produce something that satisfies it. That is completely separate.
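Written out formally, the definition might look like this (notation mine, any equivalent formulation works):

```latex
% "ys is xs sorted" as a pure definition; no algorithm is mentioned.
\mathrm{sorted}(ys) \;:\Leftrightarrow\; \forall i, j.\; i < j \rightarrow ys_i \le ys_j
\mathrm{isSortOf}(xs, ys) \;:\Leftrightarrow\; \mathrm{sorted}(ys) \wedge \mathrm{multiset}(xs) = \mathrm{multiset}(ys)
```

Nothing on the right-hand sides says how to find ys given xs; quicksort, mergesort, and bogosort all satisfy the same definition.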
And being able to see that as separate is a really useful abstraction, because it hides away many unimportant details.
Computer Science is about how-to-do-X knowledge, as SICP says. Math is about talking about stuff in full formal detail without talking about this how-to-do-X knowledge, which can get very complicated.
How does a modern CPU add two 64-bit floating-point numbers? Certainly not in the obvious, simple way, because that would be far too slow. The CPU here illustrates the point as a sort of ultimate instantiation of implementation detail.
Yes, I totally agree. But I kind of got distracted with this post and wanted to get back to work as quickly as possible. But instead of making it a perpetual draft I pushed a bit further and got some MVP thing. I agree it’s not that good, and adding concrete examples, and really making this more like a tutorial would be the first step I would take.
So I am wondering if doing something like this is still useful. Would it be worse if I had made it a perpetual draft or better?
“If you are assuming Software works well you are dead” because:
If you assume this, you will be shocked by how terrible software is every moment you use a computer, and your brain will constantly try to fix it, wasting your time.
You should not assume that humanity has it in it to make the god software without your intervention.
When making god software: Assume the worst.
If you are assuming Software works well you are dead
Mathematical descriptions are powerful because they can be very terse. You can only specify the properties of a system and still get a well-defined system.
This is in contrast to writing algorithms and data structures where you need to get concrete implementations of the algorithms and data structures to get a full description.
The Model-View-Controller architecture is very powerful. It allows us to separate concerns.
For example, if we want to implement an algorithm, we can write down only the data structures and algorithms that are used.
We might want to visualize the steps that the algorithm is performing, but this can be separated from the actual running of the algorithm.
If the algorithm is interactive, then instead of putting the interaction logic in the algorithm (which could be thought of as the rules of the world), we implement functionality that directly changes the underlying data the original algorithm is working on. These changes could be to parameters of the original algorithm, which would modify its runtime behavior (e.g. we could change the maximum search depth for BFS). They could also be to the current data the algorithm is working on (e.g. in quicksort we could change the pivot, or smaller_than_list, just before they are set). The distinction is somewhat arbitrary: if we were to step through some Python code with a debugger, we could set any variable in the program.
Usually, people think of something much “narrower” when they think about the Model-View-Controller-Architecture.
We could also do the same for a mathematical description. We can write down some mathematically well-defined thing and then separately think about how we can visualize it. And then, again separately, we can think about how we would interact with it.
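A minimal sketch of this separation in Python (BFS chosen because the text mentions its search depth; all names are mine): the model is the algorithm exposing its state step by step, the view renders that state, and a controller could mutate the exposed state between steps.

```python
from collections import deque

def bfs_steps(graph, start, max_depth):
    """Model: BFS as a generator of states. Knows nothing about display."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, depth = queue.popleft()
        yield node, depth, queue  # expose state so a view can render it
        if depth >= max_depth:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, depth + 1))

def text_view(node, depth, queue):
    """View: purely presentational; could be swapped for a GUI."""
    print("  " * depth + f"visit {node}, queue: {[n for n, _ in queue]}")

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
order = []
for node, depth, queue in bfs_steps(graph, "a", max_depth=2):
    text_view(node, depth, queue)
    order.append(node)
    # A controller would act here, e.g. mutating the exposed queue.

assert order == ["a", "b", "c", "d"]
```

Because the generator yields its internal queue, interaction code can change the data the algorithm works on without the algorithm containing any interaction logic.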
I like this a lot. You are programming for the human brain now.
It also seems nice that you can implement each step of this program in executable code, so you don’t need to perform that step manually. You might even write little helper programs to perform the tasks, which can later be reused if you decide to automate the script further.
It also seems useful for developing an algorithm. Often I might want to understand what my brain is doing and have an algorithm that emulates what my brain is doing.
Maybe this method can be used to write down a very high-level description of the steps that I think my brain is doing and then see whether me now following that procedure blindly leads to the correct result. If you get the correct result, you can then try to fill in more details, automating each individual step a bit more, and then checking if you still get the same result, and so on.
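One way to sketch this gradual fill-in loop in code (everything here is my own illustrative construction): represent the procedure as a list of steps, where each step starts out as a prompt to the human and gets swapped for real code one at a time, checking after each swap that the result is unchanged.

```python
def manual(description):
    """A step the human still performs by hand; the program only prompts."""
    def step(state):
        print(f"[do by hand] {description} (current state: {state})")
        return state  # the human acts outside the program
    return step

def double_evens(state):
    """An automated step that replaced an earlier manual prompt."""
    return [x * 2 for x in state if x % 2 == 0]

# The procedure: some steps automated, some still manual.
procedure = [
    manual("collect the input numbers"),
    double_evens,
    manual("write the result into the notebook"),
]

state = [1, 2, 3, 4]
for step in procedure:
    state = step(state)

assert state == [4, 8]
```

Each time a `manual(...)` entry is replaced by a function, you rerun the whole procedure and check the result still matches, which mirrors the “automate a bit more, then verify” loop described above.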
How to write Pseudocode and why you should
Whiteboard Program Tracing: Debug a Program Before you have the Code
I think this post would be much more effective in achieving its goal if it would provide alternatives.
What are the advantages of posting your research ideas on LessWrong? Are there other ways in which you can get these advantages? Are there maybe even alternatives that give you more of the thing you want?
I expect telling people about these alternatives (if they exist) would make them more likely to make use of them.
One of the main things I think people can get by publishing their research is to get feedback. But you could also search for people who are interested in what you are working on. Then you can send your write-ups only to these people.
Also seeing people engage with things that you write is very motivating.
These are just some rough examples as I don’t think I have very good models about what you can get out of LessWrong and how to get the same benefits in different ways.
[Question] What are the Activities that make up your Research Process?
[Question] How do you Select the Right Research Activity in the Right Moment?
It seems pretty good to me to have this in a video call. The main reason why I don’t immediately try this out is that I would need to write a program to do it.
Detangle Communicative Writing and Research
One reason why I never finish any blog post is probably that I just immediately start to write it. I think it would be better to first build a very good understanding of whatever I’m trying to understand. Only when I’m sure I have understood it should I start to create a very narrowly scoped write-up.
Doing this has two advantages. First, it speeds up the research process, because writing down all your thoughts is slow.
Second, it speeds up the writing of the final document. You are not confused about the thing, and you can focus on what is the best way to communicate it. This reduces how much editing you need to do. You also scope yourself by only writing up the most important things.
See also here for more concrete steps on how to do this.
In principle, you could use Whisper or any other ASR system with high accuracy to enforce something like this during a live conversation.
Another thing Haskell would not help you with at all is making your application good. Haskell would not force Obsidian to have unbreakable references.