That seems like a real thing, though I don’t know exactly what it is. I don’t think it’s either unboundedly general or unboundedly ambitious, though. (To be clear, this isn’t very strongly a critique of anyone; general optimization is really hard, because it’s asking you to explore a very rich space of channels, and acting with unbounded ambition is very fraught because of unilateralism and seeing like a state and creating conflict and so on.) Another example is: how many people have made a deep and empathetic exploration of why [people doing work that hastens AGI] are doing what they are doing? More than zero, I think, but very, very few, and it’s a fairly obvious thing to do—it’s just weird and hard and requires not thinking in only a culturally-rationalist-y way and requires recursing a lot on difficulties (or so I suspect; I haven’t done it either). I guess the overall point I’m trying to make here is that the phrase “wildfire of strategicness”, taken at face value, does fit some of your examples; but I’m also wanting to point at another thing, something like “the ultimate wildfire of strategicness”, where it doesn’t “saw off the tree-limb that it climbed out on”, the way empires do by harming their subjects, or the way social movements do by making their members unable to think for themselves.
What are you referring to with biological intelligence enhancement?
Well, anything that would have large effects. So, not any current nootropics AFAIK, but possibly hormones or other “turning a small key to activate a large/deep mechanism” things.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be? Why wouldn’t we have evolved to have the key trigger naturally sometimes?
Re the main thread: I guess I agree that EAs aren’t completely totally unboundedly ambitious, but they are certainly closer to that ideal than most people and than they used to be prior to becoming EA. Which is good enough to be a useful case study IMO.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be?
Not really, because I don’t think it’s that likely to exist. There are other routes much more likely to work, though. It has a bit of plausibility to me, mainly because of the existence of hormones, and more generally the existence of genomic regulatory networks.
Why wouldn’t we have evolved to have the key trigger naturally sometimes?
We do; they’re active in childhood. I think.