I am interested in participating in some type of commitment like this.
Morpheus
Consider Small Walks at Work
Some Biology Related Things I Found Interesting
I am not too surprised by this, but I wonder if he still stands by what he said in the interview with Dwarkesh:
> Some years ago, in the 2010s, I did some analysis with other people of, if this kind of picture happens, which firms and parts of the economy would benefit. There are the makers of chip equipment, companies like ASML; there are the fabs, like TSMC; there are chip designers, like NVIDIA or the part of Google that designs the TPU; and then there are companies working on the software, so the big tech giants and also companies like OpenAI and DeepMind. In general, a portfolio picking those has done well. It's done better than the market, because as everyone can see there's been an AI boom, but it's obviously far short of what you would get if you predicted this is going to be on the scale of the global economy, and the global economy is going to be skyrocketing into the stratosphere within 10 years. If that were the case, then collectively these AI companies should be worth a large fraction of the global portfolio. So I embrace the criticism that this is indeed contrary to the efficient market hypothesis. I think it's a true hypothesis that the market is in the course of updating on, in the same way that, coming into the topic in the 2000s, there was a strong case, even an old case, that AI would eventually be the biggest thing in the world, and it was kind of crazy that the investment in it was so small. Over the last 10 years we've seen the tech industry and academia realize that they were wildly underinvesting in just throwing compute and effort into these AI models, particularly in letting the neural network connectionist paradigm languish in an AI winter. I expect that process to continue as it has over several orders of magnitude of scale-up, and I expect that at the later end of that scale, which the market is partially already pricing in, it's going to go further than the market expects.
> Dwarkesh Patel 02:32:28
> Has your portfolio changed since the analysis you did many years ago? Are the companies you identified then still the ones that seem most likely to benefit from the AI boom?
> Carl Shulman 02:32:44
> A general issue with tracking that kind of thing is that new companies come in. OpenAI did not exist; Anthropic did not exist. I do not invest in any AI labs for conflict-of-interest reasons, but I have invested in the broader industry. I don't think the conflict issues are very significant there, because these are enormous companies and their cost of capital is not particularly affected by marginal investment, so I have less concern that I might find myself in a conflict-of-interest situation.
And chemistry??? It's mostly brought into the picture to talk about stoichiometry, the quantitative relationships between the amounts of reactants and products in chemical reactions. Still, what?
For what it's worth, deconfusing my patchy chemistry understanding from secondary school by reading proper university chemistry and biochemistry books paid off. For example, my chemistry class hadn't given me a deep enough appreciation for the fact that sometimes a reaction might be thermodynamically favorable but have horrible kinetics. The same is true for evolution in a lot of places, but I have not seen a lot of people using that intuition. In some sense that is obvious, but in another sense I think in a lot of subjects, including evolution, econ, and learning, I was applying that intuition inconsistently, and I definitely didn't make the distinction that clearly. Since chemistry has explicit energies that you can calculate, it's easier not to commit type errors.
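A toy sketch of that distinction (illustrative numbers, not any real reaction): the equilibrium constant is set by the Gibbs free energy ΔG, while the rate constant is set by the activation energy Eₐ, and the two can point in wildly different directions.

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # room temperature, K

# Hypothetical reaction: strongly favorable thermodynamics,
# but a huge activation barrier (horrible kinetics).
delta_G = -500e3  # Gibbs free energy of reaction, J/mol
E_a = 250e3       # activation energy, J/mol
A = 1e13          # Arrhenius pre-exponential factor, 1/s

# Equilibrium lies overwhelmingly on the product side...
K_eq = math.exp(-delta_G / (R * T))

# ...but the Arrhenius rate constant is astronomically small,
# so in practice nothing happens on human timescales.
k = A * math.exp(-E_a / (R * T))

print(f"K_eq ~ {K_eq:.1e} (equilibrium strongly favors products)")
print(f"k ~ {k:.1e} 1/s, half-life ~ {math.log(2) / k:.1e} s")
```

With these numbers the half-life comes out many orders of magnitude longer than the age of the universe, even though equilibrium "wants" the reaction to go essentially to completion.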
I noticed the following paragraphs go into more detail about how the sources relate to the claim. So my example wasn’t well chosen.
I stopped using Google as my default search engine and use Brave Search instead now. Google's AI summary is worse than useless. The first example I tried perfectly illustrates my point: the first paragraph of their AI summary links to 8 different sources. How do those 8 sources relate to the claim? I have no way of knowing without reading all 8 sources. Also, the AI summary takes longer to load than the main search results, and its lazy-loading animation is distracting. I could not find any way to turn it off.
Meanwhile, with Brave Search I can turn off the AI summary, although I didn't feel like I had to, because the summary was adding value by making it easy to see how the claim related to the source (sorry, no image included, because I don't know how to take screenshots of mouse-hover features like that, which tend to close when you screenshot them). I haven't noticed large quality differences between Brave Search and Google. I also tried Kagi, but I could not find any quality differences compared to Brave Search (although I also didn't explicitly create a benchmark for myself). If I try to find something so obscure it isn't indexed on Brave Search, I mostly use GPT-5 with search or deep research enabled.
For all Arbital content, there is the Arbital scrape index. Most (all?) of that material has been incorporated into LessWrong's concept pages.
It is hard to do as a prefix in German, I think. It sounds a bit antiquated to me, but you could try “Jung war X”. But yes, in general, I think you are going to run into problems here because German inflects a lot of words based on the gender.
Your German also gives away the gender. You should probably use a language model to double-check your sentences.
I queried my brain (I am German) and noticed my claim doesn't predict the result. Then I checked online and realized I had masculine and feminine backwards from what I had read in a dictionary once.
After checking random words, I noticed the bias goes the other way: feminine is more likely. Google gave me the same. Now I am confused.
I don't find it surprising. For example, IIRC in German about 1⁄2 of nouns are masculine, 1⁄3 are feminine, and 1⁄6 are neuter. I'd expect similar correlations/frequencies in English and other European languages, but they'd be harder to spot if you don't have gendered nouns.
In the spirit of “All stable processes we shall predict, all unstable processes we shall control,” I was thinking about how you would control the weather and earthquakes. One big problem for both of these is convection. For example, earthquakes are powered by hot material from inside the Earth being transported outward. I noticed my day-to-day intuition had been really confused by convection in solids: intuitively, moving mass around feels much less efficient than conduction. I still don't have great intuition for this, but one thing that helped was learning about the Rayleigh number (not to be confused with the Reynolds number), which quantifies in which regimes convection rather than conduction dominates. Intuitively, the problem with conduction is that it works best when there are large temperature differences locally, but the longer the distance the heat has to travel, the shallower the local temperature gradient becomes. Convection moves material in bulk, which works better with more volume. Heat transfer is so slow inside the Earth that a large part of its internal heat is still left over from the potential energy released by its formation. So if heat weren't moved by the whole mantle collectively creeping centimetres a year, even less heat would escape.
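As a rough illustration (with order-of-magnitude mantle values I looked up, not a reference calculation), plugging numbers into the standard Rayleigh number formula shows the mantle sits far above the critical value, so convection dominates despite the rock being solid:

```python
# Rough Rayleigh number estimate for Earth's mantle.
# All parameter values are order-of-magnitude illustrations.

def rayleigh_number(rho, g, alpha, delta_T, d, eta, kappa):
    """Ra = rho * g * alpha * delta_T * d^3 / (eta * kappa)."""
    return rho * g * alpha * delta_T * d**3 / (eta * kappa)

Ra = rayleigh_number(
    rho=4000.0,      # mantle density, kg/m^3
    g=10.0,          # gravitational acceleration, m/s^2
    alpha=3e-5,      # thermal expansivity, 1/K
    delta_T=3000.0,  # temperature drop across the mantle, K
    d=2.9e6,         # mantle depth, m
    eta=1e21,        # dynamic viscosity, Pa*s
    kappa=1e-6,      # thermal diffusivity, m^2/s
)

Ra_critical = 1000.0  # rough order of the critical value for onset of convection
print(f"Ra ~ {Ra:.1e}, about {Ra / Ra_critical:.1e} times critical")
```

Even with mantle viscosity around 10²¹ Pa·s, the enormous depth (which enters cubed) pushes Ra tens of thousands of times past critical, which is why slow bulk creep still beats conduction.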
So...why, then? My working theory is that I was cursed by an ancient totem I touched as a child, but I’m open to suggestions.
Your active search and your writing are both selecting for smart non-conformists. Sanity and integrity might even be correlated with those, but not strongly enough.
I have a blog, sure, but is it really that weird to like learning about some science stuff and sometimes tell people about what you learned?
It is, in fact, deeply weird. If you know a bunch of bloggers who write good posts like yourself, I would love to know your recommendations, though. I already knew 3 out of 4 of the YouTube channels you mentioned, which made me more pessimistic about how much high-quality, easy-to-digest material I might be missing. Your posts on chemistry and engineering revived some curiosity that had been withering. Since then, I’ve learned a lot more basic chemistry, biochemistry and geology, sometimes just because I was curious what things are made of and why, so thank you!
I am in the waiting room at the doctor, and the ~1-year-old child next to me is scrolling through YouTube Shorts on his mother's phone, with the mom watching along. Incredibly incoherent AI slop. I am not an expert in early development, but this seems very suboptimal. I know that scrolling is not good for my reward system, but a 1-year-old? At least put some random video on instead of letting the child scroll? If I were in charge at Google and living by “don't be evil”, I would maybe build a classifier to identify children scrolling like this and give the parent an occasional reminder suggesting alternative activities.
Thanks for writing this post! I do think some people are pursuing interp for this “wrong reason” of trying to prevent scheming, and the road where interp improves enough to make that work seems unlikely (understanding general circuits doesn't seem impossible to me, but extremely hard, and nonzero people are working on this).
I think that perhaps the mistake comes from mistaking the simplicity of the optimizer for a property of the mesa-optimizer. SGD by backprop is one algorithm so people put a single label, “deep learning,” on all models it produces. But there is no reason that all of these models must use similar circuits. They may all use an array of unique fantastically complex circuits. Understanding every circuit that can be produced by SGD at once is not a cohesive research program, and it is not a plan that will succeed.
There would be reasons to believe that models are going to use similar algorithms if they are trained on similar data. Understanding every circuit that SGD could possibly produce given “infinite training data” seems intractable, but in practice I'd expect the different algorithms SGD produces to form modular structures with common “motifs”, just as evolution does. Evolutionary developmental biology is indeed a field (one that, like interp, seems more bottlenecked on better theory than on measurement capabilities). This is why I am still excited about developmental interpretability, even though I don't have a coherent plan for how it will help us with safety beyond “more theory and foundations seem nice” (with the general caveat that I am very confused about capability externalities, but this seems kind of unavoidable for actually broad insights).
The fact that this type of thing tends to get such large emotional responses out of people makes me wonder to what extent rendering counterfactuals would be useful for combating the lack of imagination around the big decisions that actually are ahead of someone.
I would get a vaccine again if I thought I was at risk of getting it?
Thanks for spotting! Fixed!
Nucleosomes help, of course, but I was thinking of topoisomerase, which untangles DNA. My understanding is that if you pull separate strands, topoisomerase finds the local crossing points that are under strain and unties them by cutting and regluing. After being cut and before being reglued, the DNA stays attached to the topoisomerase, so the double strand doesn't just fall apart.