Nitpick: I think there’s a minor transcription error, in that “biological-esque risk” should read “biological X-risk”.
You’re thinking of “Glomarisation” (https://en.wikipedia.org/wiki/Glomarization).
See, for example, https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases and https://www.lesswrong.com/posts/bP5sbhARMSKiDiq7r/consistent-glomarization-should-be-feasible.
I’m a big believer in “the types should constrain the semantics of my program so hard that there is only one possible program I could write, and it is correct”. Of course we have to sacrifice some safety for speed of programming; for many domains, being 80% sure that a feature is correct in 95% of the possible use cases is good enough to ship it. But in fact I find that I code *faster* with a type system, because it forces most of the thinking to happen at the level of the problem domain (where it’s easy to think, because it’s close to real life); and there are a number of ways one can extremely cheaply use the type system to make invalid states unrepresentable in such a way that you no longer have to test certain things (because there’s no way even to phrase a program that could be incorrect in those ways).
For a super-cheap example, if you know that a list is going to be nonempty, use a non-empty list structure to hold it. (A non-empty list can be implemented as a pair of a head and a list.) Then you can save all the time you might otherwise have spent on coding defensively against people giving you empty list inputs, as well as any time you might have spent testing against that particular corner case.
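As a minimal sketch of this idea (in TypeScript; the names `NonEmpty`, `nonEmpty`, and `maximum` are mine, not from any particular library):

```typescript
// A non-empty list, represented as a pair of a head and a
// (possibly empty) tail, exactly as described above.
type NonEmpty<T> = { head: T; tail: T[] };

// Constructing one requires at least one element, so "empty input"
// is unrepresentable: no defensive check, no corner-case test.
function nonEmpty<T>(head: T, ...tail: T[]): NonEmpty<T> {
  return { head, tail };
}

// Functions like `maximum` become total: there is no empty case
// for which we would have to invent an error value.
function maximum(xs: NonEmpty<number>): number {
  return xs.tail.reduce((a, b) => Math.max(a, b), xs.head);
}
```

So `maximum(nonEmpty(3, 1, 4))` evaluates to 4, and `maximum([])` is simply not a phrasable program.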
For another super-cheap example that is so totally uncontroversial that it probably sounds vacuous (but it is in fact the same idea of “represent what you know in the type system so that the language can help you”), don’t store lists of (key, value); store a dictionary instead, if you know that keys are unique. This tells you via the type system that a) keys are definitely unique, and b) various algorithms like trees or hashmaps can be used for efficiency.
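The same contrast, sketched in TypeScript (the example data is made up):

```typescript
// A list of pairs merely *intends* key uniqueness; a Map *states* it
// in the type, and the runtime enforces it (later bindings win).
const byName: Map<string, number> = new Map([
  ["alice", 30],
  ["bob", 25],
]);

// Lookup is unambiguous; with a list of pairs you would first have
// to decide what a duplicate key even means.
const age: number | undefined = byName.get("alice");
```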
I believe the world is this way because of the following two facts:

- monads are very hard to get your head into;
- monads are extremely simple conceptually.
This means that everyone spends a long time thinking about monads from lots of different angles, and then one day an individual just happens to grok monads while reading their fiftieth tutorial, and so they believe that this fiftieth tutorial is The One, and the particular way they were thinking about monads at the time of the epiphany is The Way. So they write yet another tutorial about how Monads Are Really Simple They’re Just Burritos, and meanwhile their only actual contribution to the Monad Exposition Problem is to have very slightly increased the number of paths which can lead an individual to comprehension.
I’m interested in your comment about “using dynamic-untyped rather than well-typed because it helps you not worry about your own intelligence”. I use well-typed languages religiously precisely for that reason: I’m not smart enough to program in an untyped language without making far too many mistakes, and the type system protects me from my idiocy.
You can buy good tomatoes (in the UK); they’re just a bit expensive. Cheap tomatoes are nasty, but nice tomatoes are widely available; I get them from a company called Isle of Wight Tomatoes, and they’re on Ocado.
I stopped taking the book seriously when I reached Walker’s suggestion that teenagers might have a sleep cycle offset from adults’ because “wise Mother Nature” was giving them the chance to develop independence from the tribe, in a group of their peers, and that this was an important stage in a human’s social development.
If one *must* find an evo-psych explanation for this phenomenon, surely “we need people guarding the camp at more hours of the day” is simpler and less ridiculously tenuous. (Though this still has precisely the same “I could have explained anything with this” flavour that most popular evo-psych does.)
I’ve had experiences ranging from “great” to “terrible” when pairing. It’s worked best for me when I’m paired with someone whose skills are complementary to mine. Concretely: I’m very much about rigour, type-safety, correctness; the person I have in mind here is a wizard at intuiting algorithms. The combination worked extremely well: the pairer generated algorithms, and I (at the keyboard) cast them into safe/correct forms.
However, when I was paired with someone who eclipsed me in almost every dimension, I ended up feeling a bit bad that I was simply slowing us down; and conversely I’ve also experienced pairing with someone who I felt wasn’t adding much to the enterprise, and it was like coding through treacle (because the thoughts had to flow out to another person, rather than into the compiler).
In my experience, good pairs are really good, but also quite rare. You’re looking for a certain kind of compatibility.
To answer your actual question: just try it! It’s cheap to try, and you can find out very quickly if a certain pairing is not for you. (I would certainly start the exercise by making sure both parties know that “this pair isn’t working out” is not a judgement on either party.)
I started Anki-ing everything. Previously, I’ve used Anki for very specific purposes (e.g. “learn the London Underground network” or “learn all the capitals of the world”). New decks this month, though, include “Jokes”, “Legal Systems Very Different From Ours”, “Tao Te Ching”, and “Logical Induction”. I’m pretty optimistic that “read something really worthwhile, Anki it up” is becoming a habit.
A formative experience in my attitude to magic was when I saw an excellent sleight-of-hand magician performing to my small group of friends (waiting in a line for an event). He was very convincing and great fun; but there was a moment in the middle of his series of tricks when my attention was caught by something else in the distance. When I looked back after five seconds of distraction, he was mid-trick; and I saw him matter-of-factly take a foam ball from his hand, put it into his pocket, and then open his hand to reveal no foam balls—to general astonishment. All his other tricks, before and after, I found completely convincing.
Accordingly, I grok that there’s an entire art of doing incredibly obvious things in such a way that the viewer doesn’t understand that something obvious has happened. That’s one of the main things I want to learn from magic: how to perform trivial bullshit very convincingly (e.g. by knowing how to direct the viewer’s attention).
Thanks for the tip about performing repeatedly to new groups. Now that you mention it, it’s extremely obvious, but I don’t think I’d have come up with that myself.
Thanks very much for this! I’ve written a lot of stuff on there (I’m the Patrick Stevens whose name is splatted all over the screenshot). I asked them a year ago (ish) whether I could have a data dump, and they said it was Too Difficult; and I didn’t bother scraping it myself. I’m glad you actually went and did something about it!
On introductory non-standard analysis, I recommend Goldblatt’s “Lectures on the hyperreals” from the Graduate Texts in Mathematics series. Goldblatt introduces the hyperreals via an ultrapower, then explores analysis and some rather complicated applications like Lebesgue measure.
I prefer Goldblatt to Robinson’s “Non-standard analysis”, which goes very deeply into the specific logical constructions; Goldblatt doesn’t spend too much time on that, but constructs a model, proves some results in it, then generalises quite early. I also prefer it to Hurd and Loeb’s “An introduction to non-standard real analysis”, which I somehow just couldn’t get into; its treatment of measure theory, for instance, is much harder to follow than Goldblatt’s.
True, though the question of who is most cost-effective does remain yours to decide.
It’s more of a tactic to make sure people don’t think “hey, another crackpot organisation” if they haven’t already heard about them. I’m hoping to raise GWWC to the level of “worth investigating for myself” in this post.
I do something similar. I consistently massively underestimate the inferential gaps when I’m talking about these things, and end up spending half an hour talking about tangential stuff the Sequences explain better and faster.
I’d frame it as “Nick Bostrom needs Jeeves. Are you Jeeves?”
(After P.G. Wodehouse’s Jeeves and Wooster.)