US military’s special task force will explore generative AI

Can AI models make military predictions? The DoD wants to find out.
The military is increasingly utilizing virtual reality training systems and artificial intelligence in its development process. Air Force Staff Sgt. Keith James / Air Education and Training Command Public Affairs

Artificial intelligence applications like ChatGPT and DALL-E are growing more popular with the masses, and the Department of Defense is taking note. To get ahead of the potential uses and risks of such tools, on August 10 the DoD announced the creation of a new task force to analyze generative artificial intelligence and possibly integrate it into current operations.

AI is an imprecise term, and the technologies that make AI headlines often do so as much for their flaws as for their potential utility. The Pentagon task force is an acknowledgement of the potential such tools hold, while giving the military some breathing room to understand what, exactly, it might find useful or threatening about them.

While Pentagon research into AI certainly carries implications for weapons, the heart of the matter is using the technology to process, understand, and draw predictions from the military's collections of data. Sometimes this data is flashy, like drone footage of suspected insurgent meetings or of hostile troop movements. But much of the data collected by the military is exceptionally mundane, like maintenance logs for helicopters and trucks.

Generative AI could, perhaps, be trained on datasets exclusive to the military, outputting results that suggest answers the military is searching for. But the process might not be so simple. The AI tools of today are prone to errors, and generative AI could also create misleading information that gets fed into downstream analyses, leading to confusion. The possibility and risk of AI error is likely one reason the military is cautiously studying generative AI rather than embracing the technology from the outset.

The study of generative AI will be carried out by the newly organized Task Force Lima, which will be led by the Chief Digital and Artificial Intelligence Office (CDAO). The CDAO was itself created in February 2022, an amalgamation of several other Pentagon offices designed to help the military better use data and AI.

“The DoD has an imperative to responsibly pursue the adoption of generative AI models while identifying proper protective measures and mitigating national security risks that may result from issues such as poorly managed training data,” said Craig Martell, the DoD Chief Digital and Artificial Intelligence Officer. “We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions.”

One such malicious possibility for generative AI is misinformation. While some image-generation models leave fairly obvious tells, like people with extra fingers or misshapen teeth, many generated images are passable and even convincing at first glance. In March, an AI-generated image of Pope Francis in a Balenciaga coat proved compelling to many people, even after its AI origin became known and reproducible. With a public figure like the Pope, it is easy to verify whether or not he was photographed wearing a hypebeast puffy jacket. When it comes to military matters, pictures captured by the military can be slow to declassify, and the veracity of a well-done fake could be hard to disprove.

[Related: Why an AI image of Pope Francis in a fly jacket stirred up the internet]

Malicious use of AI-generated images and data is eye-catching: a nefarious act enabled by modern technology. But routine error could be of at least as much consequence. Dennis Kovtun, a summer fellow at the open-source investigation outfit Bellingcat, tested Google's Bard and Microsoft's Bing AI, chatbots that can answer questions about uploaded images. Kovtun wanted to see if AI could replicate the process by which an image is geolocated, in which the composite of details in a photograph allows a human to pinpoint where it was taken.

“We found that while Bing mimics the strategies that open-source researchers use to geolocate images, it cannot successfully geolocate images on its own,” writes Kovtun. “Bard’s results are not much more impressive, but it seemed more cautious in its reasoning and less prone to AI ‘hallucinations’. Both required extensive prompting from the user before they could arrive at any halfway satisfactory geolocation.” 

These AI 'hallucinations' occur when a model confidently generates information that is false or unsupported by its training data and presents it as fact. Introducing new and incorrect information can undermine any promised labor-saving utility of such a tool.

“The future of defense is not just about adopting cutting-edge technologies, but doing so with foresight, responsibility, and a deep understanding of the broader implications for our nation,” said Deputy Secretary of Defense Kathleen Hicks in the announcement of the creation of Task Force Lima. 

The US military, as an organization, is especially wary of technological surprise, or the notion that a rival nation could develop a new and powerful tool without the US being prepared for it. While Hicks emphasized the caution needed in developing generative AI for military use, Task Force Lima mission commander Xavier Lugo described the work as implementing the technology while managing its risks.

“The Services and Combatant Commands are actively seeking to leverage the benefits and manage the risks of generative AI capabilities and [large language models] across multiple mission areas, including intelligence, operational planning, programmatic and business processes,” said Lugo. “By prioritizing efforts, reducing duplication, and providing enabling AI scaffolding, Task Force Lima will be able to shape the effective and responsible implementation of [large language models] throughout the DoD.”