I think that from like +6.3std the heavy tail becomes a lot stronger still, because those people can bootstrap themselves extremely good mental software.
I agree something like this happens, I just don’t think it’s that strong of an effect.
I think one me::Tsvi::+7std person would probably be enough to put humanity on a path to success (given Tsvi timelines), so the “repeatedly” criterion seems a bit off to me.
A single human still has pretty strong limitations. E.g. fixed skull size (without further intervention); other non-scalable hardware (~one thread of attention, one pair of eyes and hands); self-reprogramming is just hard; benefits of self-reprogramming don’t scale (hard to share with other people).
Coercion is bad; without coercion, a supergenius might just not want to work on whatever is strategically important for humanity.
It doesn’t look to me like we’re even close to being able to figure out AGI alignment, or other gnarly problems for that matter (such as decoding egregores). So we need a lot more brainpower, lots of lottery tickets.
There’s a kind of power that comes from having many geniuses—think Manhattan project.
for the few +6std people on earth it might just give +0.2std or +0.3std,
Not sure what you’re referring to here. Different methods have different curves. Adult brain editing would have diminishing returns, but nowhere near that diminishing.
it’s sorta vice versa that extremely smart individuals might find ways to significantly leverage their capability
Plausibly, though I don’t know of strong evidence for this. For example, my impression is that modern proof assistants still aren’t in a state where a genius youngster with a proof assistant can unlock what feels like the possibility of learning a seemingly superhuman amount of math via direct dialogue with the truth—but I could imagine this being created soon. Do you have other evidence in mind?
Would you be willing to elaborate on what you meant by “decoding” egregores? I’m semi-familiar with the term (checking my impression of understanding: egregore = self-sustaining semi-agentic meme running on the computational substrate of more than one human brain, for example a corporation) but I’m not clear on what decoding means here. Like trying to transcribe the egregore’s algorithm into something easily human-readable?
Yes, that’s basically what I mean. There are a lot of coded movements. Some classes of examples:
dogwhistles
microaggressions
signaling, shibboleths
second- and higher-order norm enforcement (mocking non-enforcers of norms, etc.)
quorum sensing
performativity (playing dumb, performative lying, preference falsification, etc.)
hype / hyperstitioning
enthymemes
envisioning futures
anti-inductivity (e.g. cryptolects)
So you’d first of all want to decode this stuff so that
you can understand what’s even happening
you can reflect on what’s happening—is it good or bad, how could it be done better or worse, should it be combatted, etc.
you can support it or combat it effectively if needed.
Further, there are presumably healthy, humanity-aligned ways of participating in egregores (I mean the name is a bit scary, but like, some companies, governments, religious strains, traditions, norms, grand plans, etc., are good to participate in), or in other words effective, epistemic, Good shared-intentionality-weaving. This is an entire huge and fundamental missing sector of our philosophy. We might have to understand this better to make progress on hard things. Decoding obvious egregores would be a way in. As an example, I suspect there is some sequence of words, humanly producible, maybe with prerequisites (such as having a community backing you up, or similar), that would persuade most AGI researchers to just stop—but you might need more theory to produce those words.
Interesting, and thanks for taking the time!
That’s a very new-to-me take on getting AGI efforts to stop: understand and intervene directly on the egregore, rather than like, trying to influence individuals.
I’ll have to think about this.
It’s a pet cause of mine, to get as many people as I can off of the harmful social media platforms (which in my view is nearly all of them, weighted by readership). Possibly [considering “social media use” as an egregore, and considering how to interact with the egregore] might be more effective than my past efforts.
Your list of coded movements really rings-relevant to me—“second-order norm enforcement” made me immediately think of how people will vocally remark that you’re strange or ask why, if they learn you’re not on any social media that they’ve heard of. I suspect this mostly does not influence social-media-nonusers, but rather affects bystanders, erecting an additional barrier to exiting the egregore.
Thanks for the new mental model. Even if I end up not adopting it wholesale, it seems obviously full of useful parts!
Though to be clear, it seems very very difficult to me, like it might be at a vaguely comparable level of difficulty as “solving biology”. Which is part of why I’m not working on that directly, but instead aiming at technological human intelligence amplification.
Yeah, feels like it’s at a similar difficulty level as I’ve been experiencing trying to transcribe my own thought process as pseudocode. And I get the impression that few insights would be readily transferrable across different egregores, in which case each and every one might need its own individual effort.
Reminds me of the work that was done which caused the decline of the Ku Klux Klan: someone infiltrated, learned all the rituals and coded language used, then published that information—and that was all it took to cripple their power.
I wish you all the luck re: human enhancement.
Basically agree about the power that comes from having many geniuses, but I think alignment is the kind of problem where one supergenius might matter more. E.g. Einstein basically found general relativity 3 times faster or sth than the rest of physics would’ve. I don’t think a Manhattan project would’ve helped there, because even after Einstein published GR only relatively few people understood it (if I’m informed correctly), and I don’t think they could’ve made progress the way Einstein did; they would’ve needed more experimental evidence.
Plausible to me that there are other potentially pivotal problems that have something of this character, but idk.
Re “Do you have other evidence in mind?”: well, not very legible evidence, and I could be wrong, but here are some of my thoughts on mental software:
It seems plausible to me that someone with +6.3std would be able to do some bootstrapping loop very roughly like:
find better ontology for modelling what is happening in my mind.
train to relatively-effortlessly model my thoughts in the new better ontology that compresses observations more and thus lets me notice a bit more of what’s happening in my mind (and notice pieces where the ontology doesn’t seem to fit well).
repeat.
The “relatively-effortlessly model well what is happening in my mind” part might help significantly for getting much faster and richer feedback loops for learning thinking skills.
When you have a good model of what happened in your mind to produce some output, you can better see the parts that were useless and the parts that were important, see what you want your cognitive algorithms to look like, and plan how to train yourself to shape them that way.
When you master this kind of review-and-improving really well, you might be able to apply the skill to itself and bootstrap your review process.
It’s generally hard to predict what someone smarter might figure out so I wouldn’t be confident it’s not possible.
I agree that peak problem-solving ability is very important, which is why I think strong amplification is such a priority. I just… so far I’m either not understanding, or else you’re completely making up some big transition between 6 and 6.5?
Yeah I sorta am. I feel like that’s what I see from eyeballing the largest supergeniuses (in particular Einstein and Eliezer), but idk, it’s very little data and maybe I’m wrong.
My guess would be that you’re seeing a genuine difference, but that flavor/magnitude of difference is not very special to the 6 → 6.5 transition. See my other comment.
I think you’re massively overestimating Eliezer Yudkowsky’s intelligence. I would guess it’s somewhere between +2 and +3 SD.
Seems way underestimated. While I don’t think he’s at “the largest supergeniuses” level either, even +3 SD implies being only about 1 in ~700, i.e. millions of Eliezer-level people worldwide. I’ve been part of groups more stringently selected for talent (e.g. for national scholarships awarded on academic merit) and I’ve never met anyone like him.
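(As a rough sanity check of the rarity arithmetic here: a minimal sketch under the crude assumption that the trait is plain Gaussian, which the rest of the thread doubts for the far tail, and with an assumed world population of ~8 billion.)

```python
# Rough check of "+3 SD is about 1 in ~700, i.e. millions of people worldwide",
# assuming a plain Gaussian trait distribution.
import math

def upper_tail(z: float) -> float:
    """P(X > z) for a standard normal, via erfc to avoid cancellation."""
    return 0.5 * math.erfc(z / math.sqrt(2))

WORLD_POPULATION = 8e9  # assumed round number

p = upper_tail(3.0)
print(f"P(> +3 SD) = {p:.2e}, i.e. about 1 in {1 / p:,.0f}")
print(f"Expected count worldwide: about {p * WORLD_POPULATION:,.0f}")
# -> roughly 1 in 740, i.e. on the order of 10 million people at +3 SD or above.
```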
But are you sure the way in which he is unique among people you’ve met is mostly about intelligence rather than intelligence along with other traits?
Wait are you saying it’s illegible, or just bad? I mean are you saying that you’ve done something impressive and attribute that to doing this—or that you believe someone else has done so—but you can’t share why you think so?
Maybe bad would be a better word. Idk, I feel like I have a different way of thinking about such intelligence-explosion-dynamics stuff that most people don’t have (though Eliezer does), and I can’t really describe it all that well. I think it makes sensible predictions, but yeah, idk, I’d stay sceptical given that I’m not that great at saying why I believe what I believe there.
No I don’t know of anyone who did that.
It’s sorta what I’ve been aiming for since very recently, and I don’t particularly expect a high chance of success, but I’m also not quite +6.3std I think (though I’m only 21, and the worlds where it might succeed are the ones where I continue getting smarter for some time). Maybe I’m wrong, but I’d be pretty surprised if sth like that wouldn’t work for someone with +7std.
I mean, I agree that intelligence explosion is a thing, and the thing you described is part of it, and humans can kinda do it, and it helps quite a lot to have more raw cognitive horsepower...
I guess I’m not sure we’re disagreeing about much here, except that
I don’t know why you’re putting some important transition around 6 SDs. I expect that many capabilities will have shitty precursors in people with less native horsepower; I also expect some capabilities will basically not have such precursors, and so will be “transitions”; I just expect there to be enough such things that you wouldn’t see some major transition at one point. I do think there’s an important difference between 5.5 SD and 7.5 SD, which is that now you’ve created a human who’s probably smarter than any human who’s ever lived, so you’ve gone from 0 to 1 on some difficult thoughts (rough numbers sketched below); but I don’t think that’s special about this range, it would happen at any range.
I think that adding more 6 SD or 7 SD people is really important, but you maybe don’t as much? Not sure what you think.
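(To put rough numbers on the 5.5 SD vs 7.5 SD point above: a minimal sketch under the same crude Gaussian assumption, with assumed round figures of ~8 billion people alive and ~100 billion humans ever born.)

```python
# Expected head-counts above various thresholds, assuming a plain Gaussian trait
# (which the far tail of real "thinkoompf" may well not obey).
import math

def upper_tail(z: float) -> float:
    """P(X > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

ALIVE = 8e9        # assumed: people alive today
EVER_BORN = 1e11   # assumed: order of magnitude of humans ever born

for z in (5.5, 6.0, 6.5, 7.0, 7.5):
    p = upper_tail(z)
    print(f"+{z} SD: P = {p:.1e}, expected alive ~ {p * ALIVE:.2g}, "
          f"expected ever born ~ {p * EVER_BORN:.2g}")
# Under these assumptions +5.5 SD has occurred thousands of times in history,
# while +7.5 SD has essentially never occurred, hence "smarter than any human
# who's ever lived".
```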
First tbc, I’m always talking about thinkoompf, not just what’s measured by IQ tests but also sanity and even drive.
Idk, I’m not at all sure about that, but it seems to me like Nate and Eliezer might be a decent chunk more competent than all the other people I’m aware of. So maybe for the current era (by which I mostly mean “after the sequences were published”) it’s like 1 person (Nate) per decade-or-a-bit-more who becomes really competent, which is very roughly +6std (rough numbers below). (EDIT: Retracted because evidence too shaky. It still seems to me like the heavy tail of intelligence gets very far very quickly though.)
Like I’d guess before the sequences, and without having the strong motivator of needing to save humanity, the transition might rather have been +6.4std to +6.8std. Idk. Though tbc I don’t really expect it to be like “yeah maybe from 6.3std it enters a faster improvement curve which is then not changing that much” but more like the curve just getting steeper and steeper very fast without there being a visible kink.
I feel like if we now created someone with +6.3std the person would already become smarter than any person who ever lived because there are certain advantages of being born now which would help a lot for getting up to speed (e.g. the sequences, the Internet).
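(A last crude cross-check, this time of the earlier “1 person per decade-or-a-bit-more … very roughly +6std” mapping, again assuming a plain Gaussian and an assumed ~1.4 billion births per decade.)

```python
# How many people per decade of births clear a given threshold,
# assuming a plain Gaussian trait.
import math

def upper_tail(z: float) -> float:
    """P(X > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

BIRTHS_PER_DECADE = 1.4e9  # assumed round number

for z in (6.0, 6.3, 6.5):
    print(f"+{z} SD: about {upper_tail(z) * BIRTHS_PER_DECADE:.2g} per decade of births")
# -> ~1.4 per decade at +6.0 SD, ~0.2 at +6.3 SD, ~0.06 at +6.5 SD; so
# "one per decade-or-a-bit-more" lands near +6 SD under these assumptions.
```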