The US Army is developing AI models trained on data from real missions, with the goal of deploying a chatbot specifically for soldiers.
“We have all of these lessons learned from missions like the Ukraine-Russia War and Operation Epic Fury,” says Alex Miller, the Army’s chief technology officer, in an interview with WIRED. “There is a huge amount of knowledge available.”
Miller showed WIRED a prototype of the system, called Victor, that combines a Reddit-like forum with a chatbot called VictorBot to help troops surface useful information, like the best way to configure electromagnetic warfare systems for a particular mission. When a soldier asks how to set up their hardware, VictorBot generates an answer and points to relevant posts and comments from other service members. “Electromagnetic warfare is such a hard topic,” Miller says. Victor, he adds, “can generate a response and cite all of the lessons learned from [different] units.”
The Pentagon has ramped up its efforts to incorporate AI into military systems over the past two years, but Victor is a rare example of the military building AI for itself. The project shows how keen the US military is to master the nuts and bolts of AI—and how the technology may be poised to transform daily life for many troops.
Miller says the Army is working with a third-party vendor that will run and fine-tune the AI models powering Victor. He declined to name the firm because the contract has not yet been announced. More than 500 repositories of data have been fed into the system so far, he says, and Victor will seek to reduce errors the same way commercial chatbots do: by citing factual sources.
Efforts to integrate AI into military systems accelerated following the introduction of ChatGPT in 2022. More recently, Anthropic’s technology reportedly played a prominent role in planning operations in Iran through a system powered by Palantir.
As these systems have grown more capable, however, disagreements have emerged regarding how AI should be deployed. Earlier this year, Anthropic went head-to-head with the Pentagon, arguing that its technology should not be used to power autonomous weapons or surveil American citizens.
Same Mistakes
Victor is being developed within the Combined Arms Command (CAC). Lieutenant Colonel Jon Nielsen, who oversees the CAC’s work on Victor, says it’s not uncommon for different brigades to make the same mistakes on different missions. The goal with Victor, he adds, is to eventually make the system multimodal so that soldiers can feed in imagery or video and get insights. “Victor will be one of the only sources with access to authoritative Army information,” Nielsen says.
Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology and a former policy adviser for the Pentagon, says Project Victor highlights the potential for AI to automate a lot of unsexy back-office tasks within the Department of Defense. Late last year, the department introduced GenAI.mil, an initiative aimed at spurring greater AI adoption among DOD employees.
If Victor proves a success, however, Kahn believes the Army could eventually hire a big AI company to advance the system’s capabilities. “The big labs are obviously going to have a comparative advantage” in terms of building and deploying cutting-edge AI, she says.
Intel Failures
AI could introduce new kinds of problems for militaries, says Paul Scharre, executive vice president of the Center for a New American Security and a former US Army Ranger. Scharre says that the tendency of AI models to be sycophantic could prove especially problematic. “I could envision situations where that would be particularly worrisome in a context of intelligence analysis,” he explains.
Scharre adds that AI adoption could become more complicated as systems advance from chatbots to agents capable of using software and computer networks. “Agentic AI raises this whole new set of challenges around security,” he notes.