I do something similar. I consistently massively underestimate the inferential gaps when I’m talking about these things, and end up spending half an hour talking about tangential stuff the Sequences explain better and faster.
Smaug123
It’s more of a tactic to make sure people don’t think “hey, another crackpot organisation” if they haven’t already heard about them. I’m hoping to raise GWWC to the level of “worth investigating for myself” in this post.
True, though the question of which charity is most cost-effective does still remain for you to decide.
For introductory non-standard analysis: Goldblatt’s “Lectures on the Hyperreals”, from the Graduate Texts in Mathematics series. Goldblatt introduces the hyperreals via an ultrapower, then develops analysis and some rather involved applications such as Lebesgue measure.
I prefer Goldblatt to Robinson’s “Non-standard Analysis”, which is highly in-depth about the specific logical constructions; Goldblatt doesn’t spend too much time on those, but constructs a model, proves some results in it, and then generalises quite early. I also prefer it to Hurd and Loeb’s “An Introduction to Nonstandard Real Analysis”, which I somehow just couldn’t get into; its treatment of measure theory, for instance, is much more difficult to follow than Goldblatt’s.
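For readers who haven’t seen it, the ultrapower construction Goldblatt uses fits in a couple of lines (this is standard material, not specific to any one book):

$$^*\mathbb{R} = \mathbb{R}^{\mathbb{N}} / \mathord{\sim}, \qquad (a_n) \sim (b_n) \iff \{\, n : a_n = b_n \,\} \in \mathcal{U},$$

where $\mathcal{U}$ is a fixed non-principal ultrafilter on $\mathbb{N}$ and the field operations are defined componentwise. The class of $(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots)$ is then a positive infinitesimal: it is smaller than every standard positive real $\varepsilon$, since $\{\, n : \tfrac{1}{n} < \varepsilon \,\}$ is cofinite and hence lies in $\mathcal{U}$.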
Thanks very much for this! I’ve written a lot of stuff on there (I’m the Patrick Stevens whose name is splatted all over the screenshot). I asked them a year ago (ish) whether I could have a data dump, and they said it was Too Difficult; and I didn’t bother scraping it myself. I’m glad you actually went and did something about it!
I started Anki-ing everything. Previously, I’ve used Anki for very specific purposes (e.g. “learn the London Underground network” or “learn all the capitals of the world”). New decks this month, though, include “Jokes”, “Legal Systems Very Different From Ours”, “Tao Te Ching”, and “Logical Induction”. I’m pretty optimistic that “read something really worthwhile, Anki it up” is becoming a habit.
I stopped taking the book seriously when I reached Walker’s suggestion that teenagers might have a sleep cycle offset from adults’ because “wise Mother Nature” was giving them the chance to develop independence from the tribe, in a group of their peers, and that this was an important stage in a human’s social development.
If one *must* find an evo-psych explanation for this phenomenon, surely “we need people guarding the camp at more hours of the day” is simpler and less ridiculously tenuous. (Though this still has precisely the same “I could have explained anything with this” flavour that most popular evo-psych does.)
You can buy good tomatoes (in the UK); they’re just a bit expensive. Cheap tomatoes are nasty, but nice tomatoes are widely available; I get them from a company called Isle of Wight Tomatoes, and they’re on Ocado.
Nitpick: I think there’s a minor transcription error, in that “biological-esque risk” should read “biological X-risk”.
My immediate reaction is that I remember hating it very much at school when a teacher punished the entire class for the transgression of an unidentifiable person!
Strong +1 to the idea; I’ll be on a different team, but I strongly encourage people to give it a try. I think Hunt 2019 was quite possibly the most fun I have ever had.
(Posting this in a spirit of self-congratulation: I wrote up a spiel about what I found confusing, and then realised that I’m confused on a much more fundamental level about the nature of the various explanations and how they relate to each other, and am now going back to reread the various sources rather than writing something unhelpfully confusing about a confused confusion.)
For some years now I have had a Panasonic breadmaker, model SD-ZB2512. It takes less than five minutes of effort in the evening, generates no mess and no washing-up (if you use olive oil instead of butter, so as to avoid a greasy knife), and you can have hot fresh bread ready-baked as you wake up. The only downside to bread made this way is that you have to slice it yourself. It tastes dramatically better than all but the most expensive shop-bought bread, and the ingredients keep in a cupboard for literally months, so it’s even highly pandemic-proof. Bread that is still hot from the breadmaker is really one of the best foods I know. The machine needs essentially no upkeep: you don’t even have to clean it, as it’s basically an oven in a pot.
For some reason I can’t find any relevant hits with Google, but I’ve heard “support vs advice” described as “sympathy or fascism” before. “I want to moan at you” vs “I want you to take over and solve my problem”.
I will pick a rather large nit: “for example a web server definitely doesn’t halt” is true, but if that seems surprising, interesting, or a problem for Turing/halting reasons, it just means you are modelling the server incorrectly. Agda solves this with corecursion: the idea is to use a data type that represents a computation that provably never halts. Think of infinite streams, defined as “an infinite stream is a pair $S_0 = (x, S_1)$, where $S_1$ is an infinite stream”. This data type will provably keep producing values forever (it is “productive”), and that’s exactly what you want from a web server.
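As a hedged sketch of the same idea in Python rather than Agda: generator functions give you the “productive, never-halting” stream behaviour dynamically, though unlike Agda they don’t check productivity at the type level. All the names below are illustrative, not from any particular library.

```python
from itertools import islice

def nats():
    """A 'productive' infinite stream: it never halts, but it always
    yields the next value in finite time."""
    n = 0
    while True:
        yield n
        n += 1

def serve(handle, requests):
    """A toy server loop: an infinite stream of requests in, an infinite
    stream of responses out. It never terminates, and that is correct
    behaviour, not a halting-problem pathology."""
    for req in requests:
        yield handle(req)

# Observe finitely many elements of an infinite stream.
print(list(islice(serve(str, nats()), 3)))  # ['0', '1', '2']
```

The point is that non-termination is part of the specification here: the interesting property to verify is productivity (every request eventually gets a response), not halting.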
Indeed, this is what I use. It feels much more natural to me in the following case, where obviously our statement is not a question:
Dr Johnson kicked a large rock, and said, as his foot rebounded, “Do I refute it thus?”.
And “obviously” the full stop should go outside, because of:
Dr Johnson kicked a large rock, and said, as his foot rebounded, “Do I refute it thus?”, howling with pain.
And there’s nothing special about a question mark, so this rule should be identical if a full stop is substituted.
Fittingly, I… don’t think those words actually identify sazen :P I claim that “the thing you get if you do not take inferential distance into account” for most people would be baffled non-comprehension, not active misunderstanding.
(Or more concretely, Grand Central Station wasn’t a Schelling point in New York before it was built. Before that time, presumably there were different Schelling points.)
By the way, you’re making an awful lot of extremely strong and very common points with no evidence here (“ChaosGPT is aligned”, “we know how to ensure alignment”, “the AI understanding that you don’t want it to destroy humanity implies that it will not want to destroy humanity”, “the AI will refuse to cooperate with people who have ill intentions”, “a system that optimises a loss function and approximates a data generation function will highly value human life by default”, “a slight misalignment is far from doomsday”, “an entity that is built to maximise something might doubt its mission”), as well as the standard “it’s better to focus on X than Y” in an area where almost nobody is focusing on Y anyway. What’s your background, so that we can recommend the appropriate reading material? For example, have you read the Sequences, or Bostrom’s Superintelligence?
I’d frame it as “Nick Bostrom needs Jeeves. Are you Jeeves?” (After P.G. Wodehouse’s Jeeves and Wooster.)