*Note: This essay was originally published on the United States Naval Institute’s Proceedings commentary & analysis site.*
Sure, you love your smartphone. But what if that sleek digital device served as your company commander—would you follow it? Would you die for a computer program’s decision? Can valor be “automated”? And in advanced societies with fewer and fewer babies, just where will soldiers come from?
These are some of the profound questions raised in August Cole’s short story, “Automated Valor,” from the May 2018 Proceedings. Cole, who co-wrote Ghost Fleet, has a knack for identifying important issues that will arise in future conflicts. For that reason, his stuff is worth more than a read—it’s worth an after-action review (AAR).
According to Army Doctrinal Publication 1-02, an AAR is a “guided analysis of an organization’s performance” undertaken “during and at the conclusion” of some event for improvement going forward. The AAR’s strength is its flexibility, an ability to adapt to the conditions of nearly any endeavor. In this case, the AAR can be modified and reprogrammed to find insights in “Automated Valor” (just as I have done previously for another of Cole’s short stories, “UNDERBELLY”).
In the interest of full disclosure, I note that Cole wrote a chapter in a book I coedited, Strategy Strikes Back: How Star Wars Explains Modern Military Conflict. But because this essay is an AAR meant to extract lessons from the story—and not literary criticism of the work—our prior relationship does not present much potential for conflict of interest.
From the beginning of “Automated Valor,” the reader finds him- or herself jammed into a high-speed fighting vehicle belonging to the British Commonwealth Legion’s Third Medium Combat Team (part of the British Army’s Second Special Purpose Brigade Combat Team) in the Djibouti Free Trade Zone, April 2039. Their mission? To “prevent a quasicivilian Chinese resupply convoy from reaching the People’s Liberation Army infantry and PLA Marines occupying the port district in Djibouti City.”
Some aspects of this seem like pretty straightforward extrapolations from today’s trends: Members of the team see their surroundings in granular, digital detail; they “plug in for resupply” using 3-D printing; and they fight in an urban area by adopting a subterranean defensive position.
Three humans are on the team. But the fourth member, Churchill, is an “Active Combat Entity,” the team’s “synthetic leader” who appears as an advanced artificial intelligence (AI) program designed to “individualize its command style for the crew” in ways that “further reinforced the unique human-machine ties.” In the story, the human members of the team are so well adjusted to their nonhuman leadership that they take it in stride when the AI reroutes them with a new order.
This aspect of the story is hard to swallow. We know the world is working toward greater automation, especially in the relatively uncluttered domains of the air and at sea—but it is difficult to wrap one’s head around the idea that ground combat would be so “auto”-impacted in a single generation. But maybe.
Where the bone really gets lodged in the throat, however, is not the AI as team member but as leader. Soldiers often follow a leader because of that leader’s presence and charisma. A personal connection develops—a human one—that seems entirely antithetical to the notion that an AI could replicate or replace that bond. More generally, people want to follow heroes—not the big-screen, Marvel Infinity War kind, but in an “I-want-to-be-like-the-captain-someday” sense.
The prominent mythologist Joseph Campbell, on whose work George Lucas based much of Star Wars, said, “A hero is someone who has given his or her life to something bigger than oneself.”
At least by this definition, with no life to lose, an AI cannot be a hero. With no possible path to heroism, would soldiers follow an AI? Perhaps. If the only thing that matters is judgment that is better than a human could provide, then it is conceivable. But the effects of the loss of human-led combat leadership are hard to foretell.
The question plays only a minor role in Cole’s story, because the British team is more concerned with combat than civilians—but one other reason to be skeptical of nonhuman leadership is that, ultimately, all war is an attempt to impose some new or different social reality in human affairs. It seems fair to assume that those being imposed upon, those against whom force is being used, would be much less likely to accept any way of life dictated by a computer program, even a really great one. Thomas P. M. Barnett makes the point that if “there’s something truly valuable to contest, a country’s manned forces still need to occupy and control it; otherwise, nothing is achieved.” Barnett concludes, “Wake me up when drones can set up local government elections in Afghanistan or reconfigure Mali’s judicial system.”
Setting this skepticism aside, “Automated Valor” makes the entirely plausible case that militaries and nations will use AI to be more efficient and effective in many aspects of strategic affairs. Yet, the story does not concern itself solely with combat outputs—it also considers the inputs.
At the social and strategic level—where will future societies get grunts? Every major U.S. ally today is in demographic decline, some in absolute freefall (South Korea and Japan, for example), and realistic humanoid robots are not yet on the horizon. The future will continue to require that people sign on the dotted line and join the military.
“Automated Valor” describes one way of solving this slow-moving, looming crisis: the British “e-Commonwealth.” In the story, Great Britain has 64 million citizens, and another 27 million are members of the “e-Commonwealth,” an alternate path to citizenship for those born beyond British borders. Some 32,000 of these citizenship-seekers earn their status not by paying extra taxes but through military service (a “three-year contract with the British Commonwealth Legion”). This gives the Legion a “highly motivated, diverse, and internationally savvy fighting force that could be used in ways and places regular forces could not,” while those chosen gain benefits such as “unrestricted home market access, e-health services,” and “free online advanced education at top universities.”
Recruiting soldiers from abroad is an old idea, and even the name Cole chose to designate the unit (“Legion”) echoes the famous desert rogues that have long filled the ranks of the French Foreign Legion. But the idea is presented as a provocative blend of old and new—the great upsides of digital advances in commerce, health, and education that can provide long-distance benefits to prospective-soldier-recruits, along with the reduced downside that would come from a successful fix to the real problem of filling future military units. It makes this fictional story seem, well, potentially factual. This is an approach the United States (and other militaries and societies) ought to consider.
Which is really the point here: to consider, to raise the right questions, to the left of the next real war. Some of the trends Cole mentions in passing are essentially certain (e.g., demographics and technological change), but our policy choices are not automated—yet.
(But maybe they should be.)