Are you doing this from within Obsidian with one of the AI plugins? Or are you doing this with the ChatGPT browser interface and copy/pasting the final product over to Obsidian?
Thank you for sharing this. FYI, when I run it, it hangs on “Preparing explanation...”. I have an OpenAI account, where I use the gpt-3.5-turbo model on the per-1K-tokens plan. I copied a sentence from your text and your prompt from the source code, and got an explanation quickly, using the same API key. I don’t actually have the ChatGPT Plus subscription, so maybe that’s the problem.
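For reference, the direct test I ran looked roughly like this (a minimal sketch using the openai Python package's v1 client; the system prompt and the sample sentence are placeholders, not the actual prompt from your source code):

```python
# Minimal sketch of a direct API test, assuming the openai Python package (v1 client)
# and an OPENAI_API_KEY environment variable. The system prompt below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sentence = "Paste the sentence you want explained here."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Explain the following sentence in plain language."},
        {"role": "user", "content": sentence},
    ],
)

print(response.choices[0].message.content)
```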
ChatGPT has changed the way I read content, as well. I have a browser extension that downloads an article into a Markdown file. I open the Markdown file in Obsidian, where I have a plugin that interacts with OpenAI. I can summarize sections or ask for explanations of unfamiliar terms.
On the server side, Less Wrong has a lot of really good content. If that content could be used to fine-tune a large language model… it would be like talking to ChatLW instead of ChatGPT.
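To make that concrete, here is a hedged sketch of what a single training example might look like in OpenAI's chat fine-tuning JSONL format (the question and answer are invented, and building a real dataset from Less Wrong posts raises licensing and consent questions I'm not addressing here):

```python
# Hedged sketch: one invented training example in the chat fine-tuning JSONL format.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are ChatLW, an assistant fine-tuned on Less Wrong writing."},
        {"role": "user", "content": "What is the planning fallacy?"},
        {"role": "assistant", "content": "The tendency to underestimate how long your own projects will take, even when you know that similar projects usually run late."},
    ]
}

with open("chatlw_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```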
Things like explanations and poems are better done on the user side, as you have done.
Is there a database listing, say, article, date, link and tags? That would give you the ability to find trending tags. It would also allow a cluster analysis and a way to find articles that are similar to a given article, “similar” meaning “nearest within the same cluster”.
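To sketch what I mean (purely illustrative: the invented articles, the tags-as-text representation, and the choice of k-means are all my assumptions, not a claim about how the site actually stores its data):

```python
# Illustrative sketch: cluster articles by their tags, then define "similar to article i"
# as "nearest by cosine similarity among articles in the same cluster as i".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    {"title": "Post A", "tags": "rationality heuristics biases"},
    {"title": "Post B", "tags": "ai alignment language-models"},
    {"title": "Post C", "tags": "heuristics biases decision-theory"},
    {"title": "Post D", "tags": "ai language-models fine-tuning"},
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([a["tags"] for a in articles])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def similar_articles(i):
    """Indices of articles in the same cluster as article i, ranked by cosine similarity."""
    sims = cosine_similarity(X[i], X).ravel()
    peers = [j for j in range(len(articles)) if labels[j] == labels[i] and j != i]
    return sorted(peers, key=lambda j: sims[j], reverse=True)

for j in similar_articles(0):
    print(articles[j]["title"])
```

Tag co-occurrence is a crude signal; embeddings of the article text would probably cluster better, but the article/date/link/tags listing you describe would be enough to get started.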
I agree with the separation, but offer a different reason. Exploratory writing can be uncensored; public writing invites consideration of the reaction of the audience.
As an analogy, sometimes I see something on the internet that is just so hilarious… my immediate impulse is to share it, then I realize that there is no upside to sharing because I pretend to be the type of person who wouldn’t even think that was funny. Similarly, on more philosophical subjects, sometimes I will have an insight that is better kept private.
You see what I did there? If I were writing this in my journal, I’d include a concrete example. However, this is a public comment, and it’s smarter not to.
I copied our discussion into my PKM, and I’m wondering how to tag it… it’s certainly meta, but we’re discussing multiple levels of abstraction. We’re not at level N discussing level N-1, we’re looking at the hierarchy of levels from outside the hierarchy. Outside, not necessarily above. This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.
I was looking for real-life examples with clear, useful distinctions between levels.
The distinction between “books about books” and “books about books about books” seems less useful to me. However, if you want infinite levels of books, go for it. Again, I see this as a practical question rather than a theoretical one. What is useful to me may not be useful to you.
I don’t see this as a theoretical question that has a definite answer, one way or the other. I see it as a practical question, like how many levels of abstraction are useful in a particular situation. I’m inclined to keep my options open, and the idea of a theoretical infinite regress doesn’t bother me.
I did come up with a simple example where 3 levels of abstraction are useful:
Level 1: books
Level 2: book reviews
Level 3: articles about how to write book reviews
We’re using language to have a discussion. The fact that the Less Wrong data center stores our words in a way that is unlike our human brains doesn’t prevent us from thinking together.
Similarly, using a PKM is like having an extended discussion with myself. The discussion is what matters, not the implementation details.
I view my PKM as an extension of my brain. I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory. You can make the distinction if you like, but I find it more useful to focus on the similarities.
As for meta-meta-thoughts, I’m content to let those emerge… or not. It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.
I don’t see your distinction between thoughts and notes. To me, a note is a thought that has been written down, or captured in the PKM.
No, I don’t have an example of thinking meta-meta-rationally, and if I did, you’d just ask for an example of thinking meta-meta-meta-rationally. I do think that if I got to a place where I needed another level of abstraction, I’d “know it when I see it”, and act accordingly, perhaps inventing new words to help manage what I was doing.
I am a fan of PKM (Personal Knowledge Management) systems. There, the unit at the bottom level is the “note”. I find that once I have enough notes, I start to see patterns, which I capture in notes about notes. I tag these notes as “meta”. Now I have enough meta notes that I’m starting to see patterns among them… I’m not quite there yet, but I’m thinking about making a few “meta-meta notes”.
Whether we’re talking about notes, memes or rationality, I think the usefulness of higher levels of abstraction is an emergent property. Standing at the base level, it’s hard to anticipate how many levels of abstraction would eventually be useful, but standing at abstraction level n, one might have a better idea of whether to go to level n+1. I wouldn’t set a limit in advance.
What’s the problem with infinite regress? It’s turtles all the way up.
One nanosecond is slightly less than one meter.
No, one nanosecond is slightly less than one foot.
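The arithmetic: light travels at about 3 × 10^8 m/s, so in one nanosecond (10^-9 s) it covers roughly 0.3 m, which is just under a foot (about 0.98 ft).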
The problem statement says “arbitrary real numbers”, so the domain of your function P is -infinity to +infinity. P represents a probability distribution, so the area under the curve is equal to 1. P is strictly increasing, so it never comes back down toward zero, which makes it hard to see how the area under the curve could stay finite. I’m having trouble visualizing a function that meets all these conditions.
You say “any” such function… perhaps you could give just one example.
Interesting. I think of heuristics as being almost the same as cognitive biases. If it helps System 1, it’s a heuristic. If it gets in the way of System 2, it’s a cognitive bias.
Not a disagreement, just an observation that we are using language differently.
Regarding the first enigma, the expectation that what has worked in the past will work in the future is not a feature of the world, it’s a feature of our brains. That’s just how neural networks work: they predict the future based on past data.
Regarding the third enigma, ethical principles are not features of the world, they are parameters of our neural networks, however those parameters have been acquired.
Regarding the second enigma, I am less confident, but I think something similar is going on. Here my metaphor is not the ML branch of AI, but the symbolic processing branch of AI. Or System 2 rather than System 1, to use a different metaphor. Logic and math are not features of the world, but features of our brains.
Right, and if doing computer-generated sudokus is a kata for developing the heuristics for doing sudokus, then perhaps solving computer-generated logic problems could be a kata for developing the heuristics for rationality.
I do sudokus. These are computer-generated, and of consistent difficulty, so I can’t solve them from memory. Perhaps something similar could be done for math or logic problems, or story problems where cognitive biases work against the solution.
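As a sketch of the kind of generator I'm imagining (a toy example, not an existing tool: it produces base-rate word problems with randomized numbers, the sort where base rate neglect works against the intuitive answer):

```python
# Toy sketch of a "kata" generator: medical-test base-rate problems with random numbers.
# Solving one requires applying Bayes' rule rather than the intuitive (bias-driven) answer.
import random

def make_problem():
    base_rate = random.choice([0.001, 0.005, 0.01, 0.02])     # P(disease)
    sensitivity = random.choice([0.8, 0.9, 0.95, 0.99])       # P(positive | disease)
    false_positive = random.choice([0.01, 0.05, 0.1])         # P(positive | no disease)

    question = (
        f"A disease affects {base_rate:.1%} of the population. "
        f"A test detects it {sensitivity:.0%} of the time, "
        f"with a {false_positive:.0%} false-positive rate. "
        f"Someone tests positive. What is the probability they have the disease?"
    )

    # Bayes' rule: P(disease | positive) = P(pos | disease) P(disease) / P(pos)
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
    answer = base_rate * sensitivity / p_positive
    return question, answer

question, answer = make_problem()
print(question)
print(f"Answer: {answer:.1%}")
```

Randomizing the numbers each time keeps it from turning into a memory exercise, which is the same property I like about the computer-generated sudokus.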
Is gradient hacking a useful metaphor for human psychology? For example, peer pressure is a real thing. If I choose to spend time with certain people because I expect them to reinforce my behavior in certain ways, is that gradient hacking?
I have taken a few MOOCs and I agree with your assessment.
MOOCs are what they are. I see them as starting points, as building blocks. In the end, I’d rather take a free, dumbed-down intro MOOC from Andrew Ng at Stanford than pay for an in-person, dumbed-down intro class from some clown at my local community college. At least there’s no sunk cost, so it’s easy to walk away if I lose interest.