Here’s the thing: These so-called “killer robots” are seriously the least of our concerns. The purpose of such offensive technology is patently obvious, and researchers are free to decide whether or not they want to participate in such endeavors.
The wider challenge for the field is that all artificial intelligence research is just as applicable to offensive technologies as it is to improving the human condition. The entire research program around AI is to create new capabilities for computers to perceive, predict, decide and act without human intervention. For researchers, the best algorithms are idealized and generalizable, meaning that they should apply to any new subject with some tweaks and perhaps more training data.
Practically, there is no way to prevent these newfound capabilities from entering offensive weapons. Even if the best researchers in the world refused to work on technologies that abetted offensive weapons, others could easily take these proven models “off the shelf” and apply them relatively straightforwardly to new applications. That’s not to say that battlefield applications don’t have their own challenges that need to be figured out, but developing core AI capabilities is the critical building block in launching these sorts of applications.
AI is a particularly vexing problem of dual use — the ability of a technology to serve both positive applications and negative ones. A good example is nuclear theory, which can be used to massively improve human healthcare through magnetic resonance imaging and to power our societies with nuclear reactors, or it can be used in a bomb to kill hundreds of thousands.
AI is challenging because unlike, say, nuclear weapons, which require unique hardware that signals their development to other powers, AI has no such requirements. For all the talk of Tensor Processing Units, the key innovations in AI are mathematical and software-based in origin, with hardware performance optimization coming later. One could build an autonomous killing drone today from a consumer-grade drone, a robotic gun trigger and computer vision algorithms downloaded from GitHub. It may not be perfect, but it would “work.” In this way, AI is similar to bioweapons, which can likewise be built with standard lab equipment.
Short of stopping the development of artificial intelligence capabilities entirely, this technology is going to get built — which means it will always be possible to turn it into weapons and launch them against adversaries.
In other words, AI researchers are going to kill people, whether they like it or not.
Given that context, the right mode for organizing isn’t to stop Google from working with the Pentagon; it is to encourage Google, which commands one of the most effective lobbying operations in Washington, to push for international negotiations to ban these sorts of offensive weapons in the first place. Former Alphabet chairman Eric Schmidt chairs the Defense Innovation Board, and has a perfect perch from which to make these concerns known to the right policymakers. Such negotiations have been effective in limiting bioweapons, chemical warfare and weapons in outer space, even during the height of the Cold War. There is no reason to believe that success is out of reach.
That said, one challenge to this vision is competition from China. China has made autonomous warfare a priority, investing billions in the industry in pursuit of new tools to fight American military hegemony. Even if the U.S. and the rest of the world wanted to avoid these weapons, we may not have much of a choice. I, for one, would prefer that the world’s largest dictatorship not acquire these weapons without some sort of countermeasure from the democratic world.
It’s important to note, though, that such fears about war and technology are hardly new. Computing power was at the heart of the “precision” bombing campaigns in Vietnam throughout the 1960s, and significant campus protests focused on stopping newly founded computation centers from conducting their work. In many cases, classified research was banned from campus, and ROTC programs were similarly removed, only to be reinstated in recent years. The Pugwash conferences were conceived in the 1950s as a forum for scientists concerned about the global security implications of emerging technologies, namely nuclear energy.
These debates will continue, but we need to be aware that all AI developments will likely lead to better offensive weapons capabilities. Better to accept that reality today and work to protect the ethical norms of war than try to avoid it, only to discover that other adversaries have taken the AI lead — and international power with it.