If AI is an existential risk, it is a national security risk
If AI is a national security risk, it is a risk intelligence agencies would be interested in
If intelligence communities (in the spook sense) are interested in a risk, they are likely to develop a formal or informal research agenda into that risk
If research agendas in friendly AI exist that are not MIRI’s, MIRI may be interested in accessing those agendas
Though MIRI’s full technical research agenda is secret, it is plausible that they are not currently collaborating with intelligence agencies
MIRI may stand to benefit from access to AI research agendas from intelligence communities
If MIRI is unable to achieve collaborations on their own, LW activists may be able to assist them
Therefore, LW activists may have an interest in ‘penetrating’ intelligence agencies to extricate their technical research agendas around AI pursuant to greater research excellence and collaboration on AI safety and control problems.
If this is in MIRI’s interest, it may be in a given rationalist’s interest
Rationalists with AI subject matter expertise may be interested in pursuing friendly AI research at the object level instead
Non-subject-matter experts may be interested in penetration with the intention of gaining general access to an intelligence community’s knowledge
Intelligence agencies actively disqualify those with open curiosity about intelligence matters:
‘Viewing or downloading information from a secure system beyond the clearance subject’s need-to-know’ is cause for rejection of a security clearance in Australia
Therefore, penetrating intelligence communities to create greater transparency in the friendly-AI research arena, without the AI subject matter expertise that might improve one’s likelihood of being assigned to AI safety work specifically, may be a poor use of one’s time.
Your first three bullet points seem to imply that entities like the NSA should be expected to have research programmes dedicated to things like pandemics and asteroid strikes. That seems unlikely to me; why would the NSA or CIA or whatever be the right venue for such research? The only advantage of doing it in house rather than letting organizations dedicated to health and space handle it would be if somehow there were some nation-specific interests optimized by keeping their research secret. Which seems unlikely, because if human life is wiped out by an asteroid strike or something then the distinction between US interests and PRC interests will be of … limited importance.
Now, would we expect unfriendly AI research to be any different? I can think of three ways it might be. (1) Maybe an organization like the NSA has more in-house expertise related to AI than related to asteroid strikes. (2) There aren’t large-scale (un)friendly AI research efforts out there to delegate to, whereas agencies like NASA and CDC exist. (3) If sufficiently-friendly AI can be made, it could be harnessed by a particular nation, so progress towards that goal might be kept secret. Of these, #1 might be right but I still think it unlikely that intelligence agencies have enough concentration of relevant experts to be good places for (U)FAI research; #2 is probably true but it seems like the way to fix it would be for the nation(s) in question to fund (U)FAI research if their experts say it’s worth doing; #3 might be correct, but hold onto that thought for a moment.
LW activists may have an interest in ‘penetrating’ intelligence agencies
Jiminy. Are you seriously suggesting that an effective way to enhance AI friendliness research would be an attempt to compromise the security of national intelligence agencies? That seems more likely to be an effective way to get killed, exiled, thrown into jail for a long time, etc.
Let me at this point remind you of the conclusion a couple of paragraphs ago: if in fact there is (U)FAI research going on in intelligence agencies, it’s probably because AI is seen as a possible advantage one nation can have over another. So your mental picture at this point should not be of someone like Edward Snowden extracting information from the NSA, it should be of someone trying to smuggle secret information out of the Manhattan Project. (Which did in fact happen, so I’m not claiming it’s impossible, but it sounds like a really unappealing job even aside from petty concerns about treason etc.)
I notice that your conclusion is that for some people, attempting to breach intelligence agencies’ security in order to extract information about (U)FAI research “may be a poor use of one’s time”. I can’t disagree with this, but it seems to me that something much stronger is true: for anyone, attempting to do that is almost certainly a really bad use of one’s time.
I find it unlikely that US services have such programs without a person like Peter Thiel being aware of the existence of those programs.
LW activists may have an interest in ‘penetrating’ intelligence agencies to extricate their technical research agendas around AI pursuant to greater research excellence and collaboration on AI safety and control problems.
You don’t get research collaboration by a strategy of treating other stakeholders in a hostile manner and thinking about penetrating them.