One year ago, I left the cushy confines of the Washington, D.C., political world and arrived at Fort Benning, Georgia, to begin service to my nation as a United States Army officer. While in D.C., I worked and spoke on a variety of national security and defense policy issues, and now I was going to participate in the gritty groundwork of securing our nation.
After a year of moving among bases and seeing our armed forces in action, I am more convinced than ever that technology is not merely an essential driver of military power but perhaps the most important one for our national security.
War is no longer fought by grand armies facing off on some forsaken battlefield, and security is no longer measured in troop counts or fortresses. In the latter half of the 20th century, nuclear weaponry was seen as the holy grail of protecting a state’s existence, a technological ward against any and all adversaries because of its destructive capability.
But in the 21st century, military dominance and security are won in cyberspace, hardware, artificial intelligence, and aeronautics. Unmanned drones have enabled everything from reconnaissance to surgical strikes deep in enemy territory, reducing the need for frontal assaults altogether. Because so much of the world’s economy, society, and government operations depends on the internet, cyberspace is increasingly a “battlefield” in which coders, hackers, and others compete for superiority, resources, and control.
The Defense Advanced Research Projects Agency is at the forefront of frontier developments in artificial intelligence, automating vast tasks previously undertaken, and risked, by boots on the ground. Low Earth orbit and beyond are increasingly sensitive and in need of protection, given the prevalence of satellites and other uses of space, a concern the recently revitalized U.S. Space Command and a potential U.S. Space Force may address.
Just as the development of firearms transformed the strategies and demands of war once waged by armor-clad knights and spear-wielding infantry, modern security requirements are transforming how we must prepare and adapt to protect the United States and its interests in the technology age.
“If Mark Zuckerberg decided that he wants to serve his country in the military, we could probably make him an E-4 at cyber command,” said a former Pentagon personnel chief in 2016. In a 2018 follow-up, however, he said, “There was no way to have him come in with the stature his professional abilities demand.” That recently changed: the military announced it will potentially offer direct commissions up to the rank of colonel in areas deemed especially vital to our country’s defense, particularly cyberspace and technology.
As the 21st century continues, we are likely to see a move toward a military security strategy that is not merely technology-supplemented but technology-based. A future of wars fought almost entirely by machines, controlled and overseen by military personnel back at headquarters (or not, in the case of autonomous machines), is not guaranteed. Yet on current trends, it may not be far off.
The bigger question is how our military and society will adapt as these changes become more prevalent. In Washington, D.C., I heard how concerned policymakers are about the security implications of rapid technological developments, many of which lie well outside government control yet have immense impact.
The 20th century was the age of technology in the sense of human-controlled, human-operated machines. In the 21st century, machines will increasingly operate on their own: designed, programmed, and set loose. The results remain uncertain.
Erich Reimer is a Captain in the United States Army. All views expressed are only those of the author and not those of the Department of Defense.