You’re saying that these hypothetical elites are hypercompetent to such a Hollywood-esque degree that the normal human constraints that apply to everyone else don’t apply to them, because of “out of distribution” reasons. It seems to me that “out of distribution” here stands in as a synonym for magic.
You’re saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
It seems to me that lie detection technology makes the scenario you’re worried about even less likely? It would be enough for just a single individual from the hypothetical hypercompetent elites to testify under lie detection that they indeed honestly believe that AI poses a risk to them.
It’s worth pointing out, I suppose, that the military-industrial complex is still strongly interested in the world not being destroyed, no matter how cartoonishly evil you may believe they are otherwise, unless they are a Cthulhu cult or something. They could still stumble into such a terrible outcome via arms race dynamics, but if hypercompetence means anything, it’s something that makes such accidents less, not more, likely.
My arguments end here. From this point on, I just want to talk about… smell. Because I smell anxiety.
Your framing isn’t “here is what I think is most likely the truth”. Your framing is “here is something potentially very dangerous that we don’t know and can’t possibly ever really know”.
Also, you explicitly, “secretly” ask for downvotes. Why? Is something terrible going to happen if people read this? It’s just a blogpost. No, it’s not going to accidentally push all of history off course down into a chasm.
Asking for downvotes also happens to be a good preemptive explanation of negative reception. Just to be clear, I downvoted not because I was asked to. I downvoted because of poor epistemic standards.
Do note that I’m aware that very limited information is available to me. I don’t know anything about you. I’m just trying to make sense of the little I see, and the little I see strongly pattern-matches with anxiety. This is not any sort of argument, of course, and there isn’t necessarily anything wrong with that, but I feel it’s still worth bringing up.
You’re saying that these hypothetical elites are hypercompetent to such a Hollywood-esque degree that the normal human constraints that apply to everyone else don’t apply to them, because of “out of distribution” reasons. It seems to me that “out of distribution” here stands in as a synonym for magic.
I think that “hypercompetent” was a poor choice of words on my part, since the crux of the post is that it’s difficult to evaluate the competence of opaque systems.
You’re saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
It’s actually the other way around; existing as an inner regime means surviving many years of evolutionary pressure from being targeted by all the rich, powerful, and high-IQ people in any civilization or sphere of influence (and the model describes separate inner regimes in the US, China, and Russia, which are in conflict, not a single force controlling all of civilization). That is an extremely wide variety of evolutionary pressure (which can shape people’s development), because any large country has an extremely diverse variety of rich, powerful, and/or high-IQ people.
It seems to me that lie detection technology makes the scenario you’re worried about even less likely? It would be enough for just a single individual from the hypothetical hypercompetent elites to testify under lie detection that they indeed honestly believe that AI poses a risk to them.
The elites I’m describing are extremely in tune with the idea that it’s worthwhile for foreign intelligence agencies to shape information warfare policies to heavily prioritize targeting members of the inner regime. Therefore, it’s worthwhile for them to cut themselves off from popular media entirely, and to be extremely skeptical of anything that unprotected people believe.
It’s worth pointing out, I suppose, that the military-industrial complex is still strongly interested in the world not being destroyed, no matter how cartoonishly evil you may believe they are otherwise, unless they are a Cthulhu cult or something. They could still stumble into such a terrible outcome via arms race dynamics, but if hypercompetence means anything, it’s something that makes such accidents less, not more, likely.
I definitely think that Molochian races to the bottom are a big element here. I hope that the people in charge aren’t all nihilistic moral relativists, even though that seems like the kind of thing that would happen.
My arguments end here. From this point on, I just want to talk about… smell. Because I smell anxiety.
You definitely sensed/smelled real anxiety! The more real exposure people get to dangerous forces like intelligence agencies, the more they realize that it makes sense to be scared. I definitely think the prospect of EA/LW stepping on the toes of powerful people, and being destroyed or damaged as a result, is a scary one.
Is something terrible going to happen if people read this? It’s just a blogpost. No, it’s not going to accidentally push all of history off course down into a chasm.
Specific strings of text in specific places can absolutely push all of history off course down into a chasm!
As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you).
- Howie Lempel, EA Forum, “Regulatory Inquiry into Effective Ventures Foundation UK”
- You believe that there is a strong evolutionary pressure to create powerful networks of individuals that are very good at protecting their interests and surviving in competition with other similar networks.
- You believe that these networks utilize information warfare to such an extent that they have adapted by cutting themselves off from most information channels, and are extremely skeptical of what anyone else believes.
- You believe that this policy is a better adaptation to this environment than what anyone else could come up with.
- These networks have adapted by being so extremely secretive that it’s virtually impossible to know anything about them.
- You happen to know that these networks have certain (self-perceived) interests related to AI.
- You happen to believe that these networks are dangerous forces and it makes sense to be scared.
- This image that you have of these networks leads to anxiety.
- Anxiety leads to you choosing and promoting a strategy of self-deterrence.
- Self-deterrence leads to these networks having their (self-perceived) interests protected at no cost on their behalf.
Given the above premises (which, for the record, I don’t share), you have to conclude that there’s a reasonable chance that your own theory is an active information battleground.
My model actually holds that information warfare has mostly become an issue only recently (in the last 10-20 years), and that these institutions evolved before that. Mainly, information warfare is worth considering because:
1) it is highly relevant to AI governance, as no matter what your model of government elites looks like, the modern information warfare environment strongly indicates that they will (at least initially) see the concept of a machine god as some sort of 21st-century-style ploy
2) although there are serious falsifiability problems that limit the expected value of researching potential high-competence decision-making and institutional structure within intelligence agencies, I’m arguing that the expected value is not very low, because the evidence for incompetence is also weak (albeit less weak), and because evidence of incompetence all the way up is itself an active information battleground (e.g. the news articles about Trump and the nuclear chain of command during the election dispute and January 6th).