Attacks by hostile governments and criminal networks on civilian and Defense Department cyberspace assets are constant threats. As artificial intelligence grows in cyberspace and matures to enterprise scale, it, too, will become a target, said the director of the Joint Artificial Intelligence Center.
The first aspect of cyber defense of AI starts with the networks, Marine Corps Lt. Gen. Michael S. Groen said today during a virtual fireside chat at the Billington CyberSecurity Summit.
“The department is undergoing a little bit of a mind shift on networks and architecture. Our networks are a core piece of our warfighting architecture. Our networks are weapons, and, so, we have to treat them like weapons. We have to plan to protect them, make them resilient, because everything that we’re going to do in an artificial intelligence or data-driven way will depend on the security [of] those networks,” he said.
As a result, the department has focused heavily on network security and made substantial progress shoring up areas such as zero-trust architecture, cloud security, the transport layer, and switching and routing, Groen pointed out.
Adversarial AI poses its own unique challenges, such as data poisoning, spoofing and deepfakes, he said.
“There’s special attention being paid to AI security within the department, so a lot of work is being done on testing vulnerabilities of algorithms and keeping a lid on spoofings, interruptions and poisonings,” he said.
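To make the data-poisoning threat concrete: an adversary with access to training data can flip labels so a model learns the wrong behavior. The sketch below is a hypothetical illustration, not the department's actual tooling; it flags training points whose label disagrees with most of their nearest neighbors, a simple consistency check that can surface flipped labels.

```python
# Hypothetical illustration of a label-poisoning check: flag training
# points whose label disagrees with the majority of their neighbors.
# Names and data here are invented for the example.

def nearest_neighbors(points, idx, k):
    """Return the indices of the k points closest to points[idx]."""
    x, y = points[idx]
    dists = sorted(
        (((px - x) ** 2 + (py - y) ** 2, i)
         for i, (px, py) in enumerate(points) if i != idx)
    )
    return [i for _, i in dists[:k]]

def flag_label_outliers(points, labels, k=3):
    """Indices whose label disagrees with most of their k neighbors."""
    suspects = []
    for i in range(len(points)):
        neighbor_labels = [labels[j] for j in nearest_neighbors(points, i, k)]
        if neighbor_labels.count(labels[i]) < len(neighbor_labels) / 2:
            suspects.append(i)
    return suspects

# Two clean clusters; the last label has been flipped ("poisoned").
points = [(0, 0), (0, 1), (1, 0), (1, 1),
          (10, 10), (10, 11), (11, 10), (11, 11)]
labels = ["benign"] * 4 + ["malicious", "malicious", "malicious", "benign"]
print(flag_label_outliers(points, labels))  # → [7]
```

Real defenses combine many such checks with provenance controls on where training data comes from; this sketch only shows the core idea of testing data for internal consistency.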
Detecting threats and anomalous activity on the network at high operating tempos, such as in warfighting, is important, he said. AI can aid humans in monitoring the network for threats — a feat that’s sometimes beyond the physical and mental ability of humans.
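The simplest version of machine-assisted anomaly detection is a statistical baseline: learn what "normal" traffic looks like and flag large deviations. The sketch below is a minimal, hypothetical illustration of that idea; operational systems use far richer features and learned models.

```python
# Hypothetical sketch: flag anomalous network activity against a
# simple statistical baseline (mean and standard deviation of
# requests per minute). Invented data for illustration only.
import statistics

def find_anomalies(samples, threshold=2.5):
    """Return (index, value) pairs that lie more than `threshold`
    standard deviations from the mean of the observed samples."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [(i, v) for i, v in enumerate(samples)
            if abs(v - mean) / stdev > threshold]

# Requests per minute: steady traffic with one suspicious spike.
traffic = [120, 118, 125, 122, 119, 121, 980, 117, 123, 120]
print(find_anomalies(traffic))  # → [(6, 980)]
```

A human analyst reviewing ten numbers can spot the spike; the point is that software applies the same judgment across millions of events per second, around the clock, which is exactly the monitoring burden Groen describes as sometimes beyond human capacity.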
There are a number of initiatives to employ AI for network protection, Groen said. The department is working closely with U.S. Cyber Command, network managers and others to make this happen.
Groen said he thinks that the ability of the department to employ AI for network protection will grow rapidly over the next several years.
A culture shift will be needed for operators and warfighters to embrace AI integration, he said.
Commanders who are going to use AI in decision making will need to understand its limits and what it will actually do for them. Operators have to know how to use AI and how to wield an algorithm like they would wield a weapon, he said.
The department isn’t going to flip a switch and suddenly turn on AI, he said. Instead, it has chosen an incremental approach, starting AI at a small scale for the most pressing problems and then finding other ways to use AI to make processes work better in an ethical framework.
There’s a lot of latent talent among the young people joining the military, he said. That talent needs to be encouraged and developed so that they can find challenge and reward in using cyber and AI to enable the warfighters.