links 4/8/26: https://roamresearch.com/#/app/srcpublic/page/04-08-2026
https://benjaminrosshoffman.com/telescopes-need-good-lenses/ this amounts to “yes, I think I am a better parent than liberals/progressives.” I am unconvinced.
One of his examples is that his kids are in the 99th percentile for height and weight despite not being visibly fat, thus he thinks that other parents are underfeeding their children. This doesn’t make sense. Child obesity is in fact a significant problem; on average, US parents probably feed their kids too many calories, not too few.
The fact is, it’s not that unusual for kids to be “outliers” in size; I don’t know why, though measurement error at the doctor’s office is common. Anecdotally, “99th percentile” large or “1st percentile” small kids (who appear neither fat nor scrawny) seem like a lot more than 2% of the population. The thing is, the official distribution of heights and weights for kids is very narrow, so a “99th percentile” kid is not that enormous and a “1st percentile” kid is not that tiny. Plus, kids’ growth rates change enough over time that they usually grow out of “outlier” status. IMO, Ben’s kids are normal-sized, their “99th percentile” status may very well be random fluctuation unrelated to how he feeds them, and he’s jumping to conclusions from inadequate data when he infers that most of his peers underfeed their kids.
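A quick way to see how measurement noise inflates the apparent outlier count: if true sizes are normally distributed and each clinic reading adds independent noise, then well over 2% of single readings land past the true 1st/99th-percentile cutoffs. A minimal simulation sketch (the noise level of 0.5 z-units is my assumption, purely illustrative):

```python
import random
from statistics import NormalDist

# Hypothetical numbers (mine, not from the post): suppose each child's
# "true" size is a standard-normal z-score, and a single clinic
# measurement adds noise with SD 0.5 z-units (scale error, rounding,
# a squirming kid).
random.seed(0)
N = 200_000
z99 = NormalDist().inv_cdf(0.99)  # true 99th-percentile cutoff, ~2.33

observed = [random.gauss(0, 1) + random.gauss(0, 0.5) for _ in range(N)]

# Fraction of single measurements landing beyond the 1st/99th percentile
# cutoffs of the *true* distribution. With no noise, this would be 2%.
frac_outliers = sum(abs(z) > z99 for z in observed) / N
print(f"apparent 'outliers': {frac_outliers:.1%}")  # roughly 3.7%
```

Under these assumed numbers, nearly twice the nominal 2% of kids look like “outliers” on any one visit, and a single noisy reading says little about whether a kid will still be past the cutoff at the next one.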
I’ll grant him that being better at cooking tasty food might make kids less likely to be picky eaters, but there’s a lot of noise in that as well (I’m a pretty good cook and my kids often don’t like what grownups think is delicious). And I also am skeptical that liberals are worse cooks than conservatives on average in the US. (remember, the OP Scott post was about politics!)
I have less objection to his overall point, which is a critique of altruism itself. Yes, people will never be as careful, as grounded, or as effective in helping others when they’re motivated primarily by getting good-boy/good-girl points as they would be if their practical self-interest were entangled.
I’m not sure that I buy that EA-style or progressive-style altruism is outright bad; for the most part, i think people aren’t harming themselves by doing it, and even if they help suboptimally they may still be helping a little. I see people as mostly putting themselves first, with a little ineffectual flavor text to make them feel good about themselves, and maybe also a little genuine help going to people who need it, and I don’t see a problem with the flavor text or the modest amount of help. I think charity is not obligatory and not the most important part of morality, but it’s probably net good; and I’m not too fussy about whether people rationalize/exaggerate how good they are, if it makes them feel better. (I also think building AI is cool and have no moral objection to it, so I don’t hold EAs blameworthy there either.)
I don’t like the idea that we evaluate the goodness of “communities” by how “functional” they are, including via measuring fertility—yes, in an evolutionary sense, fertility is “fitness”, but that doesn’t tell me why i should care about it. I don’t want to be Satmar or Amish. If they are “virtuous”, so much the worse for virtue!
https://icmi-proceedings.com/ICMI-E-virtue-under-pressure.html Tim Hwang is trying to make AIs with Christian values. this freaks me out, even though i suppose it shouldn’t (Christians are not so overwhelmingly dominant worldwide that we wouldn’t also have secular AIs)
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted entertaining and thorough profile of Sam Altman, including a very well documented pattern of people saying he lies a lot and can’t be trusted, going back to his Y Combinator days.
https://nymag.com/intelligencer/article/why-citrini-research-sent-an-analyst-to-the-strait-of-hormuz.html I gotta admire a finance firm that sends a researcher to an active war zone to get firsthand data.
links 4/20/26: https://roamresearch.com/#/app/srcpublic/page/04-20-2026
https://www.lieder.net/lieder/get_text.html?TextId=91385 this is the song in Hilary Mantel’s Thomas Cromwell trilogy
https://en.wikipedia.org/wiki/Nicholas_Culpeper the herbalist was related to the earlier courtier Thomas Culpeper
https://thezvi.substack.com/p/on-dwarkesh-patels-podcast-with-nvidia satisfying rundown, I agree with Zvi Mowshowitz on all points, as i often do
https://www.lesswrong.com/posts/WewsByywWNhX9rtwi/current-ais-seem-pretty-misaligned-to-me this seems weak to me. the kind of “misalignment” that is “AI coding agents are too eager to say they’ve solved your problem when they really haven’t” is a.) exactly the sort of thing AI labs are economically incentivized to fix, and b.) kind of unreasonable to complain about?! if you are letting your agents completely off the leash on open-ended projects, they are going to screw things up, and you will need some kind of supervisory process. it seems to me like serious professionals already know that and are successfully compensating for it? if you think of the models as TOOLS YOU USE, which of course have flaws and limits, but are trending in the right direction over the course of months and years, you tend to have a better time with them and make predictions more in line with reality...
https://www.verysane.ai/p/alignment-is-proven-to-be-tractable i’m not sure i agree with the *title* but i agree with the essay, i was surprised by how little of a thing “brittleness” is in large 2020s models (in the sense of “AI model does something very different from what the user intended” or “model doesn’t generalize to slightly different real-world scenarios from the training data”). language seems to really help with that!
https://en.wikipedia.org/wiki/List_of_human_cell_types red blood cells are by far the most common cell type in the human body