Isn’t the college student example an example of 1 and 2? I’m thinking of e.g. students who become convinced of classical utilitarianism and then join some Effective Altruist club etc.
I don’t think so, not usually. What happens after they join the EA club? My observations are more consistent with people optimizing (or sometimes performing to appear as though they’re optimizing) through a fairly narrow set of channels. I mean, humans are in a weird liminal state, where we’re just smart enough to have some vague idea that we ought to be able to learn to think better, but not smart and focused enough to get very far with learning to think better. More obviously, there’s anti-interest in biological intelligence enhancement, rather than interest.
After people join EA, in my experience, they tend to start applying the optimizer’s mindset to more things than they did before, and also to apply optimization towards altruistic impact in a bunch of places where they were previously optimizing for e.g. status or money or whatever.
What are you referring to with biological intelligence enhancement? Do you mean nootropics, or iterated embryo selection, or what?
That seems like a real thing, though I don’t know exactly what it is. I don’t think it’s either unboundedly general or unboundedly ambitious, though. (To be clear, this isn’t very strongly a critique of anyone; general optimization is really hard, because it’s asking you to explore a very rich space of channels, and acting with unbounded ambition is very fraught because of unilateralism and seeing like a state and creating conflict and so on.) Another example: how many people have made a deep and empathetic exploration of why [people doing work that hastens AGI] are doing what they are doing? More than zero, I think, but very very few, and it’s a fairly obvious thing to do; it’s just weird and hard, and requires not thinking in only a culturally-rationalist-y way, and requires recursing a lot on difficulties (or so I suspect; I haven’t done it either). I guess the overall point I’m trying to make here is that the phrase “wildfire of strategicness”, taken at face value, does fit some of your examples; but I also want to point at another thing, something like “the ultimate wildfire of strategicness”, which doesn’t “saw off the tree-limb that it climbed out on” the way empires do by harming their subjects, or the way social movements do by making their members unable to think for themselves.
What are you referring to with biological intelligence enhancement?
Well, anything that would have large effects. So, not any current nootropics AFAIK, but possibly hormones or other “turning a small key to activate a large/deep mechanism” things.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be? Why wouldn’t we have evolved to have the key trigger naturally sometimes?
Re the main thread: I guess I agree that EAs aren’t totally, unboundedly ambitious, but they are certainly closer to that ideal than most people are, and than they themselves were prior to becoming EAs. Which is good enough to make them a useful case study IMO.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be?
Not really, because I don’t think it’s that likely to exist. There are other routes much more likely to work, though. It does have a bit of plausibility to me, mainly because of the existence of hormones, and more generally of genomic regulatory networks.
Why wouldn’t we have evolved to have the key trigger naturally sometimes?
We do; they’re active in childhood. I think.