I was especially thinking of your recent post “on goal models”. I tried to look up other posts, saw your post on distributed agents, and then this clicked; I feel like I better understand now where you are going with this. I find distributed agents and how to think about them confusing.
Morpheus
With most of your posts, either I already agree with the conclusion, or I disagree and am still not convinced after reading the post, or the topic seems really confusing and reading your post leaves it more confusing than before. With this post, I thought I had a good explanation, but now I see it wasn’t adequate. When I saw the title of this post, I thought it was going to be interesting. This post genuinely changed my mind.
Some more completionism for proteins that don’t turn over, or turn over only extremely slowly, in mammals:
Rec8 in Oocytes (it holds the chromosomes together)
Some histones (Histone H3.1 in this rat study)
Some proteins in the nuclear pore complex (Nup205, Nup93 etc.)
Nuclear pore complex proteins not turning over in post-mitotic cells suggests a hypothesis for aging I haven’t heard before: most cells turn over, but quite a few cell types don’t, or do so extremely slowly (brain, skeletal muscle, heart (only slowly), oocytes, podocytes, etc.). Most things inside those cells do turn over, but some structures only on the scale of decades (see above). I can see nuclear pore complex damage and histone damage feeding into DNA damage. If cells that don’t turn over spread their damage to cells that do, I could see how this might be the upstream ratchet on the “DNA damage <-> mitochondrial ROS” loop, so it’s not the stem cells after all. A weakness of this model is that nuclear pore protein mutations aren’t really associated with progerias. The first thing I could think of to test this was looking at what happens if you transplant different organs from young to old mice and vice versa. Will report back what I find.
Artificial yeast exists now! So someone might want to run the experiment!
I am less excited about doing this in yeast than when I wrote this comment, but overall would still find it interesting. I now find it more likely that the bottlenecks in Saccharomyces cerevisiae are sufficiently different from multicellular eukaryotes that results might not translate (even if transposons were the root cause in mice and humans). Most importantly, since yeast are single-celled, we are mostly talking about aging of the mother cells, which divide asymmetrically from their daughter cells. Evolution then found weird tradeoffs that lead to bugs in mother cells with improved results in daughter cells:
For example, extrachromosomal rDNA circles (ERCs) accumulate in mother cells and shorten their lifespan, but somehow mostly don’t end up in the daughter cells. From my shallow investigation, it seems relatively well established that deleting Fob1 significantly extends the lifespan of mother cells. The story: Fob1 is involved in the repair of rDNA, which improves the growth potential of the daughter cells, but sometimes accidentally creates these ERCs that accumulate in the mother. There might be many similar examples like this, but none of these problems really translates to multicellular organisms, where they get solved the same way as in yeast colonies: through selection.
If transposons were the root cause of aging in mother cells, then just going through gametogenesis should not reset the lifespan of yeast, but it does (note, though, that I couldn’t quickly find a replication of this paper). When yeast perform meiosis, they split their nuclei into four spores and a fifth dumpster compartment, leaving behind ERCs and other things that would hurt the spores.
Molecular dynamics was also the first counterexample I was thinking of.
So physical chemistry textbooks will talk about the MD code but NOT talk about the subtle detailed aspects of interacting methanol molecules that distinguish a −98°C freezing point from −96.
Using heuristics here gets easier, though, if you require less precision. I actually think that textbook could totally be written. Maybe not for why it is −98 rather than −96, but different heuristics and knowing the boiling points of other molecules should get you quite far (maybe why it is −98 rather than −108). I would absolutely read that textbook.
From my notes for the book Zero to One: “Competition is destructive and not a sign of value”.
For your alpha, look for secrets (things you know or are confident in, but no one else is). Create something that is 10x better than any alternative. Don’t start the next restaurant. You want to be the next monopoly.
Not the focus of that book, but personally, I would also like to create value, not only capture it. So I’d aim not to start the next Elsevier, Coca-Cola, or Facebook, even though they have great profit margins. There are good and bad monopolies.
The problem I’m trying to understand is more of a meta/proof-theoretic one: why do some arithmetical claims have a proof only when passing through non-arithmetical language?
I agree this is an interesting question. Thanks for pointing me to the speed-up theorem. I didn’t know about that one. :)
This sounds horribly inefficient; intuitively, it sounds like any “natural” statement provable in PA should be provable using tools from this system, and not by encoding concepts from a different field.
Yeah, I don’t share that intuition. It feels like if that were true, there would be no other fields and everyone would be using arithmetic for everything at all times. I guess your phrasing of “natural” is doing a lot of work here.
There are fundamental rules of the universe that I don’t yet understand. And for some reason, one of them seems to spell out: “thou shall use complex-valued analysis to study the behavior of prime numbers”.
I am not at all a number theory expert, and I am not quite sure what shape of explanation you are looking for here. One possible explanation, though, is that you might be missing the forest for the trees. From my outsider perspective, the connection is already visible in your introduction: prime numbers → modular arithmetic ~= arithmetic on circles → complex numbers
If you have a problem involving operations on circles, complex analysis seems like exactly the type of thing you would want to throw at it. The arrow prime numbers → modular arithmetic actually seems more worthy of a good compressed explanation.
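To make the “arithmetic on circles → complex numbers” arrow concrete, here is a minimal Python sketch (the function name is mine, and this is only an illustration): mapping residues mod n to n-th roots of unity turns addition mod n into multiplication of complex numbers.

```python
import cmath

def circle_point(a, n):
    """Map the residue a mod n to the root of unity e^(2*pi*i*a/n)."""
    return cmath.exp(2j * cmath.pi * a / n)

n = 7
a, b = 3, 6
# Addition mod n becomes multiplication of points on the unit circle:
lhs = circle_point((a + b) % n, n)
rhs = circle_point(a, n) * circle_point(b, n)
assert abs(lhs - rhs) < 1e-9
```

This homomorphism from Z/nZ to the unit circle is the same move Dirichlet characters make, which is one concrete road from modular arithmetic into complex analysis.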
I just watched the Veritasium video. The forest fire simulation really made self-organized criticality click for me, when the sand pile analogy absolutely hadn’t (though to be fair, I had only read the vague description in Introduction to Complex Systems, which is absolutely inadequate compared to just seeing a simulation).
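For anyone who wants to play with it, here is a minimal sketch of the kind of forest-fire model the video shows (the parameter values are my own guesses, not Veritasium’s): empty cells grow trees with probability p, lightning ignites a tree with probability f, and fire spreads to neighboring trees.

```python
import random

EMPTY, TREE, FIRE = 0, 1, 2

def step(grid, p=0.05, f=0.001):
    """One synchronous update of a Drossel–Schwabl-style forest-fire model."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            cell = grid[i][j]
            if cell == FIRE:
                new[i][j] = EMPTY  # burnt out
            elif cell == TREE:
                neighbor_on_fire = any(
                    grid[x][y] == FIRE
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < n and 0 <= y < n
                )
                if neighbor_on_fire or random.random() < f:
                    new[i][j] = FIRE  # spread, or lightning strike
            else:  # EMPTY
                if random.random() < p:
                    new[i][j] = TREE  # regrowth
    return new

grid = [[EMPTY] * 30 for _ in range(30)]
for _ in range(500):
    grid = step(grid)
```

The self-organized criticality shows up in the distribution of fire sizes: without tuning any parameter to a special value, the system drifts toward a state where fires of all scales occur.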
If the thing were really symmetrical like the post describes, it should definitely be exploitable by someone, not necessarily smart, but with unconventional preferences?
Interesting! Makes sense.
If there’s a way to make this version work for non-naive updates that seems good, and my understanding is it’s mostly about saying for each new line “given that the above has happened, what are the odds of this observation?”
Yes, that’s it. I am not trying to defend the probability version of Bayes’ rule, though; when I was trying to explain Bayes’ rule to my wordcel gf, I also used the odds ratio.
This version though? This I think most people could remember.
By most people, you mean most people hanging around the LessWrong community, because they know programming? I agree, an explanation that uses language the average programmer can understand seems like a good strategy for explaining Bayes’ rule given the rationality community’s demographics (above-average programmers).
Maybe this is a case of Writing A Thousand Roads To Rome where this version happened to click with me but it’s fundamentally just as good as many other versions. I suspect this is a simpler formulation.
Was it the code or the example that helped? The code is mostly fine. I don’t think it is any simpler than the explanations here; the notation just looks scarier.
Either someone needs to point out where this math is wrong, or I’m just going to use this version for myself and for explaining it to others
This version is correct for naive Bayes, but naive Bayes is in fact naive and can lead you arbitrarily astray. If you wanted a non-naive version, you would write something like this in pseudopython:
```
for i, E IN enumerate(EVIDENCE):
    YEP *= CHANCE OF E IF all(YEP, EVIDENCE[:i])
    NOPE *= CHANCE OF E IF all(NOPE, EVIDENCE[:i])
```

I see the case for starting with the naive version though, so this is more of a minor thing.
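For what it’s worth, a runnable Python version of that non-naive update would look like the sketch below (the scenario and all the conditional probabilities are made up purely for illustration):

```python
# Odds-ratio Bayes update where each likelihood may depend on the
# hypothesis AND on all previously seen evidence (non-naive version).
# All conditional probabilities here are invented for illustration.

evidence = ["growl", "tracks"]

# Keyed by the prefix of evidence seen so far, including the current item;
# each value is P(current observation | hypothesis, previous evidence).
chance_if_yep = {
    ("growl",): 0.6,           # P(growl | bear)
    ("growl", "tracks"): 0.9,  # P(tracks | bear, growl)
}
chance_if_nope = {
    ("growl",): 0.1,           # P(growl | no bear)
    ("growl", "tracks"): 0.2,  # P(tracks | no bear, growl)
}

yep, nope = 1.0, 1.0  # prior odds 1:1
for i, e in enumerate(evidence):
    seen = tuple(evidence[: i + 1])
    yep *= chance_if_yep[seen]
    nope *= chance_if_nope[seen]

posterior = yep / (yep + nope)  # convert odds back to a probability
```

The naive version would instead use P(e | hypothesis) alone for each item, ignoring the conditioning on earlier evidence.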
I don’t see a lot more going for the bear example except for it being about something dramatic, so more memorable. Feels like you should be able to do strictly better examples. See Zane’s objections in the other comment.
I like this post a lot. It might explain why I feel like an expert at addition, but not on addition. I notice that when I am struggling with things like this in math, I often start blaming my own intellect instead of asking what is making this hard and whether bad design is to blame. The second approach seems much more likely to solve the problem. Noticing that word problems are harder seems like a good thing to notice, especially if you want to become an expert at using a particular math tool. For example, I don’t think I currently really get exterior products, and searching for relevant word problems might be a good way to practice. LLMs might be useful for creating problems I can’t solve (although I found it astonishing a while ago that Sonnet 3.5 was not able to consistently create word problems for applying Bayes’ rule (~50% were just wrong)).
Suppose the agent’s utility function is concave, i.e. the agent prefers (50% double wealth, 50% lose everything) over (100% wealth stays the same).
I think you meant to write convex here.
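A quick numeric sanity check (my own illustration; the wealth level and utility functions are chosen arbitrarily) of why the quoted preference implies convexity rather than concavity:

```python
import math

w = 100.0  # current wealth

def expected_utility(u):
    """Expected utility of the gamble: 50% double wealth, 50% lose everything."""
    return 0.5 * u(2 * w) + 0.5 * u(0)

concave = math.sqrt            # risk-averse utility
convex = lambda x: x ** 2      # risk-seeking utility

# A concave agent prefers keeping w for sure:
assert expected_utility(concave) < concave(w)
# A convex agent prefers the gamble, as the quoted agent does:
assert expected_utility(convex) > convex(w)
```

So the preference described in the quote (gamble over sure thing) is what a convex utility function produces.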
Nice work in keeping up your public journal.
There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.
I currently believe that the factor that explains most of this remaining variance is “paranoia”. In particular, the kind of paranoia that becomes more adaptive as your environment gets filled with more competent adversaries. While I am undoubtedly not going to succeed at fully conveying why I believe this, I hope to at least give an introduction to some of the concepts I use to think about it.
I don’t know if this was intended, but up until the end I was reading this post thinking that in this paragraph you meant the variance is explained by people not being paranoid enough, or not paranoid in the right way, and that this is why you explain in this post how to be paranoid properly.
I like this post. What I really wish, though, is that I were better at explaining this to my friends and family. Has anyone on here ever had any success explaining this to the extent that you feel the other person is really getting it? Perhaps I should become a truth cleric for a while and see if I can convert literal people on the street.
I like this post. I’ve been thinking for a while that I feel like I am doing pretty well in terms of epistemic rationality, but I have quite some trouble figuring out what I want or what I even endorse on reflection. I noticed with your wizard post that this was not something I would ever have come up with, because I would not have looked for “true names” of the thing I want in fiction.
Below is some brainstorming of examples where I could get more of what I want.
Notice: with my ego-dystonic wants I probably have more room for improvement. Perhaps the goal should be to not have ego-dystonic wants? They are the main reason I have a hard time with agenticness.
With ego-syntonic wants, I already do this. For example, just before reading this post, I was asking myself whether there could be a company doing long-read sequencing for consumers like John who are peculiar and want to understand themselves better (I soon concluded this would be worse than MetaMed, so then thought about other people who might be interested in long-read sequencing).
My ego-dystonic interests I don’t know how to deal with as well. I remember one of my post-rationalist friends commenting that I seem to only do things I consider useful. For example, I tried to get rid of all the useless hobbies I pursued in the past after they ceased being useful. An ego-dystonic interest that I don’t know how to integrate in a useful way is competitiveness. I get absolutely addicted to improving and competing on metrics. Number go up! For example, hobbies/games that sucked me in deep in the past include: juggling, cubing, chess, Dominion, learning all Japanese kanji with Anki (and just staring at the stats ~5–25% of the time), making predictions on Metaculus (trying not to be too tempted to maximize points), the universal paperclips game, etc.
I now don’t pursue any of the above, because improving at these doesn’t give me enough improvement in other areas of my life I care about. I also notice that unless there is a competitive element where I feel like I have worthy competition, the metrics lose their appeal after some time. The problem with Japanese was also that the only reason to do that particular one was to prove to myself that memorization is not that hard. I recently started using Anki more again to remember math and science knowledge, but it doesn’t feel as addictive when I have to curate all the cards myself. With the kanji, I had premade cards; I was allowed to just grind through.
With Metaculus, I was strongly frustrated that the thing I was competing on was easily Goodharted into something that wasn’t teaching me anything. I enjoyed Manifold because the incentives were aligned, but then the new problem was that it incentivized me to be more distracted than I would like, so I stopped using Manifold much. I absolutely loved the Thinking Physics question challenge. My main bottleneck there was friends who were capable and motivated enough to compete with. I had thought of starting a local workshop in Melbourne to work together on problems we don’t understand. My thinking was that the hard step seems to be finding problems that everyone is excited to work on. Now I think the best solution is probably just having some array of challenging problems to pick from, so you can choose something that everyone finds interesting. Perhaps the first challenge is to come up with lots of cool problems.
Part of me is thinking, though, that tradeoffs are terrible. Perhaps playing chess, cubing, or playing Zelda some of the time, and spending the rest of the time working on illegible problems despite less outside motivation, might be the way to go. Sadly, most of the real value is in places where no one can compete with you. Any place where it’s convenient to compete (online games with Elo matching being the prototypical example) is where the least of the value is. Finding creative ways to improve my skills by being motivated by competition might be an exception here, though. Like running workshops of the sort Raemon is running.
Hm… writing this took me 90 minutes. Ben claims you can write a reasonably long LessWrong comment in under 30 minutes. I already failed the Halfhaven challenge, because I could not think of something neat that felt like a round idea to put in a blog post. Also, writing my blog posts took way too long. I did notice that the 500-word lower limit was holding me back there from publishing short things (I hated the blog post drafts where I had a neat 100-word idea and expanding it to 500 words felt absolutely impossible and wrong). I often like reading rambly comments; I don’t like reading super rambly comments; and I find it hard to strike the balance (in general, I find it hard to write about internal conflicts as they are happening). Here at the end, I went back and forth on writing out what I thought my takeaway from this was. I do think internal conflict is a huge part of what makes my writing slow.
Mitochondria and lysosomes could also run into issues in post-mitotic cells, as well as other things I haven’t thought of.