Sure, agreed. I had severe chronic migraines for well over a decade, which required a lot of little patchwork solutions and some random drugs. I eventually figured these things out. But if I had tried one new thing a day to solve them, then I would’ve saved myself a lot of pain.
Yeah, the wobbly chair story is a better example. Somehow, I feel more satisfied with it. Perhaps because it is a basically complete solution, for so little work?
But you fix the wobbly chairs so you can build up momentum to fix the dopamine addiction. And I’m not sure if I made this clear, but each of the things I tried for fixing my dopamine addiction did help a bit, and now I know that if I really want to, I can stack them together to reset my dopamine system. Once that’s done, it’s a lot easier to keep it paused.
So in that sense, these are permanent wins which have reduced the total amount of willpower I need to exert to partially fix my dopamine addictions.
I wonder what the collaborative process was like, who wrote what. Eliezer’s typical writing is... let’s go with “abrasive.” He thinks he’s smarter than you, he has the chutzpah to be right about that far more often than not, and he’s unrepentant of same, in a manner that outrages a large fraction of primates. That tone is entirely absent from IABIED. I wonder if a non-trivial part of Nate’s contribution was “edit out all the bits of Eliezer’s persona that alienate neurotypicals,” or if some other editor took care of that. I’m pretty sure someone filtered him; when, say, the Example ASI Scenario contains things like (paraphrased) “here’s six ways it could achieve X; for purposes of this example at least one of them works, it doesn’t matter which one,” I can practically hear Eliezer thinking “...because if we picked one, then idiots would object that ‘method Y of achieving X wouldn’t work, therefore X is unachievable, therefore there is no danger.’” And then I imagine Nate (or whoever) whapping Eliezer’s key-fingers or something.
In the interview sessions for people who pre-ordered the book, Nate said the writing process involved multiple rounds of Eliezer writing waaaay too much stuff, followed by Nate cutting it down by 3x. Some of the leftovers were re-used in the online supplements.
EDIT: Thank you for writing this review. It is basically what I would’ve said if I had to write a review.
I believe you when you say that people output their true beliefs and share what they’re curious about w/ the chatbot. But I don’t think it writes as if it’s trying to understand what I’m saying, which implies a lack of curiosity on the chatbot’s part. Instead, it seems quite keen to explain/convince someone of a particular argument, which is one of the basins chatbots naturally fall into. (Though I do note that it is quite skilful in its attempts to explain/convince me when I talk to it. It certainly doesn’t just regurgitate the sources.) This is often useful, but it’s not always the right approach.
Yeah, I see why collecting personal info is important. It is legitimately useful. Just pointing out the personal aversion I felt at the trivial inconvenience to getting started w/ the chatbot, and reluctance to share personal info.
(I think our bot has improved a lot at answering unusual questions. Even more so on the beta version: https://chat.stampy.ai/playground. Though I think the style of the answers isn’t optimal for the average person. Its output is too dense compared to your bot’s.)
asking someone about their existing beliefs instead and then listening and questioning them later on when you’ve established sameness
This is true. And deep curiosity about what someone’s actual beliefs are helps a great deal in doing this. However, modern LLMs kinda aren’t curious about such things? And it’s difficult to get them in the mindset to be that curious. Which isn’t to say they’re incurious—it just isn’t easy to get them to become curious about an arbitrary thing. And if you try, they’re liable to wind up in a sycophancy basin. Which isn’t conducive to forming true beliefs.
Avoiding that Scylla just leads you to the Charybdis of the LLMs stubbornly clinging to some view in their context. Navigating between the two is tricky, much less managing to engender genuine curiosity.

I say this because AI Safety Info has been working on a similar project to Mikhail’s at https://aisafety.info/chat/. And while we’ve improved our chatbot a lot, we still haven’t managed to foster its curiosity in conversations with users. All of which is to say, there are reasons why it would be hard for Mikhail, and others, to follow your suggestion.
EDIT: Also, kudos to you Mikhail. The chatbot is looking quite slick. Only, I do note that I felt averse to entering my age and profession before I could get a response.
Two categories are equivalent when they are isomorphic up to an isomorphism
Thaaaaat is a confusing sentence. But thankfully, the rest of your comment clears things up.
Why do you want this notion of equivalence or adjunction, rather than the stricter notion of isomorphism of categories?
Right, if we rely on this notion of “the real identity”. I think discussing that would get into even more confusing territory than just focusing on formalizations that look like they’re some kind of equality.
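For concreteness, the two formalizations I have in mind (just the standard definitions, as I understand them, for functors $F : \mathcal{C} \to \mathcal{D}$ and $G : \mathcal{D} \to \mathcal{C}$):

An isomorphism of categories demands that the composites literally equal the identity functors,
$$G \circ F = 1_{\mathcal{C}} \quad \text{and} \quad F \circ G = 1_{\mathcal{D}},$$
whereas an equivalence of categories only demands natural isomorphisms,
$$G \circ F \cong 1_{\mathcal{C}} \quad \text{and} \quad F \circ G \cong 1_{\mathcal{D}}.$$

So the confusing sentence above is, I think, gesturing at the second notion: the composites are the identity “up to an isomorphism” rather than on the nose.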
Feature incentivizing grabbing attention.
Wait, “horizons clitch’”? What the heck is that apostrophe doing there? Was that intentional?
And oh yeah, the bottomless pit greentext. That was pretty impressive.
Besides isomorphisms and equality of objects, do category theorists use other notions of “equality”?
I’m confused. In any FOL, you have a bunch of “logical axioms” which come built into the language, and axioms for whatever theories you want to investigate in said language. You need these or else you’ve got no way to prove basically anything in the language, since your deduction rules are: state an axiom from your logical axioms, state an axiom from your assumed theory’s axioms, or apply Modus Ponens. And the logical axioms include a number of axiom schemas, such as the ones for equality that I describe, no?
Actually you have just described the same thing twice. There are actually fewer distance-preserving maps than there are continuous ones, and restricting to distance-preserving maps removes all the isomorphisms between the sphere and the cube.
That is a very good point. Hmm. So it seems just plain false that you can break equivalence between two objects by enriching the number of maps between them?
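Spelling out the example to check my understanding (taking the “cube” to be its surface, $\partial[0,1]^3$): every distance-preserving map is continuous, so
$$\mathrm{Hom}_{\text{dist-preserving}}(S^2, \partial[0,1]^3) \subsetneq \mathrm{Hom}_{\text{continuous}}(S^2, \partial[0,1]^3).$$
The smaller set contains no isomorphisms at all, since no surjective isometry exists between the round sphere and the cube’s surface, while the larger set contains plenty of homeomorphisms. And in general, any isomorphism present in the smaller hom-sets survives into the larger ones, so enriching the maps can only create isomorphisms, never destroy them.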
Sorry, I used the word “definition” sloppily there. I don’t think we disagree with each other.
I meant something closer to “how equality is formalized in first order logic”. That’s what the bit about the axiom schemas was referencing: it’s how we bake in all the properties we require of the special binary predicate “=”. There’s a big, infinite core of axiom schemas specifying how “=” works that’s retained across FOLs, even as you add/remove constant, relation and function symbols to the language.
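Concretely, the schemas I have in mind are the usual ones (this is one common presentation, nothing exotic):

Reflexivity: $\forall x \, (x = x)$.

Substitution for functions, one instance per $n$-ary function symbol $f$ and argument position:
$$\forall x \, \forall y \, \big( x = y \rightarrow f(\dots, x, \dots) = f(\dots, y, \dots) \big).$$

Substitution for formulas, one instance per formula $\varphi$:
$$\forall x \, \forall y \, \big( x = y \rightarrow (\varphi(x) \rightarrow \varphi(y)) \big).$$

The latter two range over all function symbols and all formulas of the language, so they stand for infinitely many axioms: that’s the big, infinite core.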
Yeah, that sure does seem related. Thinking about it a bit more, it feels like equality refers to a whole grab-bag of different concepts. What separates them, what unites them and when they are useful are still fuzzy to me.
Thank you, that is clearly correct and I’m not sure why I made that error. Perhaps because equivalence seems more interesting in category theory than in set theory? Which is interesting. Why is equivalence more central in category theory than set theory?
No worries! For more recommendations like those two, I’d suggest having a look at “The Fast Track” on Sheafification. Of the books I’ve read from that list, all were fantastic. Note that the site emphasises mathematics relevant for physics, and vice versa, so it might not be everyone’s cup of tea. But given your interests, I think you’ll find it useful.
Started reading [Procesi] to learn invariant theory and representation theory because it came up quite often as my bottleneck in my recent work (eg). Also interpretability, apparently. So far I just read pg 1-9, reviewing the very basics of group action (e.g., orbit stabilizer theorem). Lie groups aren’t coming up until pg ~50 so until then I should catch up on the relevant Lie group prerequisites through [Lee] or [Bredon].
Woit’s “Quantum Theory, Groups and Representations” is fantastic for this IMO. It gives physical motivation for representation theory, connects it to invariants and, of course, works through the physically important Lie groups. The intuitions you build here should generalize. Plus, it’s well written.
Also, if you are ever in the market for differential topology, algebraic topology, and algebraic geometry, then I’d recommend Ronald Brown’s “Topology and Groupoids.” It presents the basic material of topology in a way that generalizes better to the fields above, along with some powerful geometric tools for calculations. Both authors provide free PDFs of their books.
But they are doing things that they believe introduce new, huge negative externalities on others without their consent. This rhymes with a historically very harmful pattern of cognition, where folks justify terrible things to themselves.
Secondly, who said anything about Pausing AI? That’s a separate matter. I’m pointing at a pattern of cognition, not advocating for a policy change.
Bjartur Tomas asked me the same thing. I told him I thought it was a reference to Daniel Dennett. That just baffled him. Honestly, I think I just noticed the vibes kinda matched (consciousness philosopher, humorous text about consciousness) so I assumed that there had to be a Dennett joke in there somewhere. But no. Bjartur Tomas then told me what DANNet was really referencing: an arbitrary NN he found w/ about the same synapse count as a shrimp. (It’s the first pure deep CNN to win computer vision contests, circa 2011.)
Thanks for the rec! I find it amusing because I stumbled onto this series the other day. Nothing is ever a coincidence, so surely the Algorithm is trying to tell us something. But what?