How I operationalize crash-only design in my data-generation code, given that Data Denormalization Is Broken:
When operating on database data, I try to write functions whose default behavior on each invocation is to re-process a large chunk of data and regenerate all the derived values, idempotently. (I'd regenerate the whole database on every invocation if I could, but there's a tradeoff around how big a chunk is still fast enough to reprocess.)
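A minimal sketch of what I mean, with hypothetical names (`Post`, `regenerateDerived`) that aren't from any real codebase. The key property is that rerunning the pass is always safe, so a crash mid-run just means you run it again:

```typescript
// Hypothetical record type with one derived field.
interface Post {
  id: string;
  body: string;
  wordCount?: number; // derived from body, never hand-edited
}

// Recompute every derived field from scratch on each invocation.
// Running it twice gives the same result as running it once, so a
// crash partway through leaves nothing to clean up.
function regenerateDerived(posts: Post[]): Post[] {
  return posts.map(post => ({
    ...post,
    wordCount: post.body.split(/\s+/).filter(Boolean).length,
  }));
}
```

The idempotence is what makes this crash-only: there's no separate "recovery" code path, because the normal code path already is the recovery path.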
Seems very plausible to me. Thanks for sharing.
A related question is why the topic of GoF research still didn't get much LW discussion in 2020.
Bravo, this is on the meta level a great example of applying epistemic rationality to replace a vague concept with better concepts. The post uses specific examples everywhere to be clearly understandable and easy to apply. It could be part of my specificity sequence, with a title like “The Power to Clarify Concepts”.
That ease of application comes from the use of specific examples everywhere.
“Bad names make you open the box” is in multiple ways a special case of the more general principle that “Good system architecture is low-context” or “Good system architecture has a sparse understanding-graph”.
If we imagine a graph diagram where each node N representing a part of the system (e.g. a function in a codebase) has edges coming in from all other nodes that one must understand in order to understand N, then a good low-context architecture is one with the fewest possible edges per node.
The post talks about how a badly-named function causes there to be an understanding-edge from the code inside that function to that function. More generally, a badly-architected function requires understanding other parts of the system in order to understand what it does. E.g.:
- If the function mutates a global state variable, then the reader must understand outside context about that variable's meaning in order to understand the function.
- If the function does a combination of work that only makes sense in the context of your program—rather than being a more program-independent reusable part—then its understanding-graph will have extra edges to various other parts of your program. Or in the best case, where your function is well-documented to avoid imposing those understanding-edges on the reader, you're still adding extra edge weight from the function to the now-longer-winded docstring.
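The global-state case can be shown in a few lines. This is an illustrative contrast with made-up names, not code from the post: the first function drags in an understanding-edge to outside context, while the second is fully understandable from its signature alone.

```typescript
// Global state the reader must go track down and understand.
let taxRate = 0.08;

// High-context: to understand what this returns, you must also
// understand taxRate — where it's set, when it changes, what it means.
function addTaxGlobal(price: number): number {
  return price * (1 + taxRate);
}

// Low-context: everything needed to understand the function is
// visible in its signature, so its understanding-graph node has
// no incoming edges from elsewhere in the program.
function addTax(price: number, rate: number): number {
  return price * (1 + rate);
}
```

In the graph picture, `addTaxGlobal` has an incoming edge from every piece of code that can mutate `taxRate`; `addTax` has none.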
The “sparse understanding-graph” is also applicable to org charts of people working together. You ideally want the sparsest possible cooperation-graph.
Ya I don’t know the details even though I use NodeJS almost every day :) Maybe it does run parallel requests in separate threads.
Agree with #3, presenting definitions with examples first.
Congrats on this research, feels like you’re onto something huge!
Re database normalization, it's obviously good to do if you can afford the hit to speed and scalability. Unfortunately I believe the software industry currently has a big problem with a lack of capable databases to support elegant data denormalization patterns: https://lironshapira.medium.com/data-denormalization-is-broken-7b697352f405
NodeJS is mostly cool because you can use the same language and the same development tools across your whole stack. When it launched I think another selling point was that it’s reasonably good at handling multiple requests in parallel.
Upvoted for teaching concepts well by using specific and concrete examples, even when the concepts are ironically "generalization" and "abstraction".
I experienced Landmark Forum 13 years ago and this post is a good summary of it.
It seems like they’ve settled on a bunch of heuristic mental models to (1) push people to change their state to potentially break out of old patterns and make life changes and (2) perpetuate the organization.
They don’t provide good quality explanations and answers to questions. They don’t hold themselves to the standards of productive discourse. They offer a shell of pre-generated heuristics for you to “try on” (their phrase). They admit that that’s what they’re giving you, but I think for the LW crowd it wouldn’t be that hard to have a version of Landmark offering more robust concepts and tools.
Thanks for writing this. I just read the book and I too found Part I to be profoundly interesting and potentially world-changing, while finding Parts II and III shallow and wrong compared to the AI safety discourse on LessWrong. I’m glad someone took the time to think through and write up the counterargument to Hawkins’ claims.
Reminds me of that old LW April Fools' joke where they ran the whole site as a Reddit fork.
RelationshipHero.com—convenient dating, relationship and couples coaching over Zoom
Clubhouse being valued at $1B by Andreessen Horowitz in the latest funding round implies that they also think it has a >10% chance of being a major success.
The biggest signal they’re looking at is the growth rate: it has over 10M users and is still growing at 10%/wk, which is in the absolute top tier of startup growth metrics.
I think Clubhouse will probably have 50-100M users in a few months, and have acted on this prediction by dedicating a full-time marketing person to building my company’s presence on it.
It seems clear to me that the percentage of days worked remotely will never drop back below double the pre-pandemic value, at least.
Upvoted for providing an important deepening of the popular understanding of "Schelling point".
More generally, “portray yourself as an empathetic character” is a social skill I find myself using often. Basically copy the way the protagonists talk on This American Life, where even the ones who’ve done crazy things tell you their side of the story in such a way that you think “sure, I guess I can relate to that”.