US Military Global Information Dominance Experiments


Recently, the US military gave a briefing on what I take to be the current state-of-the-art capabilities for using ML for intelligence gathering. Some quotes worth highlighting follow, with some notes and impressions at the bottom.

The Global Information Dominance Experiment that we’re going to talk about, and more specifically, the recent Global Information Dominance Experiment 3 was a cross-command event seeking to leap forward our ability to maintain domain awareness, achieve information dominance and provide decision superiority in competition and crisis.

So the experiment—we won’t name nations but let’s just say it was focused on a peer competitor. This time in GIDE 3 we focused a lot on contested logistics, to give us a scenario where maybe a line of communication such as the Panama Canal may be challenged. It enabled us to rapidly collaborate amongst all 11 combatant commands and across the department to see that data and information. We’re taking sensors from around the globe, not only military sensors but commercially available information, and utilizing that for domain awareness.

Specific to your question about artificial intelligence and what I call information dominance, we would take artificial intelligence and use machine learning to take a look and assess, for example, the average number of cars in a parking lot that may be there at a specific location tied to a competitor or a threat. And we monitor that over a period of time.

The machine learning and the artificial intelligence can detect changes in that and we can set parameters where it will trip an alert to give you the awareness to go take another sensor such as GEOINT on-satellite capability to take a closer look at what might be ongoing in a specific location.

What we’ve seen is the ability to get way further what I call left, left of being reactive to actually being proactive. And I’m talking not minutes and hours, I’m talking days.

The ability to see days in advance creates decision space. Decision space for me as an operational commander to potentially posture forces to create deterrence options to provide that to the secretary or even the president. To use messaging, the information space to create deterrence options and messaging and if required to get further ahead and posture ourselves for defeat.

First of all, all 11 combatant commands are using the exact same environment, a single pane of glass. There’s no difference between what United States Northern Command and NORAD have and what SOUTHCOM, or SPACECOM, or anybody else has. We’re all collaborating in the same information space using the same exact capabilities.

This data and information—we’re not creating new capabilities to go get data and information. This information exists from today’s satellites, today’s radar, today’s undersea capabilities, today’s cyber, today’s intel capabilities.

The data exists. What we’re doing is making that data available and shared into a cloud where machine learning and artificial intelligence look at it. And they process it really quickly and provide it to decision-makers, which I call decision superiority.

This gives us days of advance warning and the ability to react. Where, in the past, we may not have put an analyst’s eyes on a GEOINT satellite image, now we are doing that within minutes or in near real-time. That’s the primary difference that I’m talking about.

Q: Thank you for taking my question. So I’m wondering if there are any concerns about a self-fulfilling prophecy, where you get into a loop and there are some assumptions made. Is there a thought about that?

Yes. So, the first thing I would tell you is that humans still make all the decisions in what I’m talking about. We don’t have any machines making decisions. Certainly, machines can provide options.

Today, we end up in a reactive environment because we’re late with the data and information. And so all too often we end up reacting to a competitor’s move. And in this case, it actually allows us to create deterrence, which creates stability by having awareness sooner of what they’re actually doing.

So for example, in the intelligence communities, you know, historically, we’ve taken that intelligence. We’ve allowed an analyst to pore over that for days, sometimes, before we made that intelligence available. We may have to make that intelligence available sooner in the future by sharing the raw data, the real-time data, and allowing machines to look at that data, things that, today, an analyst may do. The machine can take a look and tell you exactly how many cars are in a parking lot, or how many airplanes are parked on a ramp, or if the submarine’s getting ready to leave, or if a missile’s going to launch. Where that may have taken days or hours before, today it can take seconds or minutes. Those are policy issues that we’ll have to sort through and build trust and confidence.
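To make the “cars in a parking lot” mechanism concrete, here is a minimal sketch of the kind of alerting logic described above, assuming the imagery has already been reduced to a daily car count. The rolling window, z-score threshold, and synthetic data are illustrative choices of mine, not anything disclosed in the briefing:

```python
import numpy as np

def detect_anomalies(car_counts, window=30, z_threshold=3.0):
    """Flag days where the observed count deviates sharply from a
    rolling baseline (|count - mean| > z_threshold * std)."""
    alerts = []
    for day in range(window, len(car_counts)):
        baseline = car_counts[day - window:day]   # trailing window of past counts
        mean, std = baseline.mean(), baseline.std()
        if std == 0:
            continue  # flat baseline; no variation to compare against
        z = (car_counts[day] - mean) / std        # how unusual is today?
        if abs(z) > z_threshold:
            alerts.append((day, int(car_counts[day]), round(float(z), 1)))
    return alerts

# Illustrative data: ~50 cars/day, then a sustained surge from day 60 on.
rng = np.random.default_rng(0)
counts = rng.poisson(50, 90)
counts[60:] += 40
for day, count, z in detect_anomalies(counts):
    print(f"day {day}: {count} cars (z = {z}) -> cue a higher-resolution sensor")
```

The interesting part is not the statistics but the cueing pattern the briefing describes: a cheap, always-on model watches broad-area data and only escalates to scarce assets (a GEOINT satellite pass, an analyst’s attention) when something deviates from baseline.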

Some thoughts

  • The US military seems way behind industry here. In particular, the “cars in the parking lot” example is something that hedge funds/startups have been doing since at least 2010 (source).

  • The statement “all 11 combatant commands are using the exact same environment, a single pane of glass. There’s no difference between what United States Northern Command and NORAD have and what SOUTHCOM, or SPACECOM, or anybody else has” is interesting, because it implies that CENTCOM, the command for forces in Afghanistan, also had access to these experimental capabilities.

  • The commander giving the press briefing emphasizes that humans are still making the decisions, but also emphasizes that those decisions now happen much faster. This could have interesting effects, which I imagine would be mostly negative (faster reaction times might lead to faster escalation and leave less time to cool off or to integrate contradictory information).

  • I was fairly impressed that a reporter brought up the possibility of self-fulfilling prophecies.

  • It would seem beneficial to me to start thinking about the potential safety problems these kinds of capabilities could have. In particular (without having given it much thought): if one expects the successors to these kinds of programs to become fairly powerful, it would seem like a good idea to embed a safety team at, e.g., NORAD or whichever organization is in charge of this project, in a similar way to how OpenAI or DeepMind have safety teams today to deal with current problems and to theorize about and anticipate new ones, or how I imagine stock exchanges have someone responsible for avoiding, e.g., flash crashes. Someone who could push for this would be Jason Matheny.

    • By default, I am not going to personally push for this; I’m just putting the idea out there because it seems like a very specific ask which could be beneficial. There might also be better ideas in the broader space of “collaborate with NORAD on current and future AI safety”.

See also: AI race considerations in a report by the U.S. House Committee on Armed Services
