Elon Musk is one of the world's most influential, wealthy, and intelligent people. He's the leader behind Tesla Motors and SpaceX, two companies that I find incredibly ambitious and worth paying attention to. He's also a proponent of using technology for the betterment of our culture and society, which is something I can usually stand behind and applaud. Over the last few years he's been banging the drum that Artificial Intelligence needs to be controlled and kept non-sentient in order to ensure our species has a future. Recently he joined Mustafa Suleyman of Google's DeepMind and a group of 116 specialists from 26 countries in calling on the UN to ban autonomous weapons in much the same way chemical and biological weapons are banned from use. While I agree with the idea in principle, a ban may not be the most logical solution to the problem of nations sending "armies" of robots to war. Until nations have a means to destroy or disable remote or autonomous vehicles en masse through EMP or similar weapons, it only makes sense that militaries not only possess "killer robots", but continue working on improving the software that operates them.
The wars fought in my lifetime have been mostly commercial endeavours, with parties battling for resources at the expense of human life while the UN issues toothless condemnations. One of the biggest costs facing nations and megalomaniacs hellbent on annexing territory is supplying the people who are pillaging, occupying, and otherwise conquering space on a map. Armies, air forces, and navies cost money. Lots of it. But if one could instead employ machines to clear out territories, the operational costs of skirmishes would go way down. More than this, training is essentially reduced to zero, as everything a machine would need to know before going to battle could be loaded into memory within seconds. Gone are the days, weeks, or months of training to learn the art of war. "Loyal" machines could be built by the thousands with each passing day, allowing a hostile force to overwhelm the defences of all but its most powerful adversaries. A lot of people think North Korea with ICBMs and miniaturized nuclear warheads is a problem. A legion of drones carrying several thousand rounds of ammunition, mini-missiles, and a kamikaze sensibility to spend every last bullet before the batteries run down would be just as terrifying. These could be built in secret and deployed under the radar, catching nation states completely unaware until the death toll was in the thousands, and leaving infrastructure in place for the encroaching power to occupy territory without having to rebuild roads, power, and telecommunications lines along the way.
Wars of the future will be absolutely terrifying, and human soldiers alone would simply be no match for such a horrific sight as 50,000 drones flying like locusts into the heart of a city, firing indiscriminately at anything that moves.
Rather than prohibit killer robots, we should enlist the best people to build them while following Asimov's three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
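The key detail in the three laws is their strict precedence: each law only applies when it doesn't conflict with the laws above it. As a purely illustrative sketch (all names here are hypothetical, not any real robotics API), that precedence could be encoded as an ordered series of checks:

```python
# Illustrative sketch only: Asimov's three laws as a strict priority
# ordering. All function and parameter names are hypothetical.

def evaluate_action(harms_human: bool, ordered_by_human: bool,
                    self_destructive: bool) -> bool:
    """Return whether a proposed action is permitted, checking the
    laws in priority order (the First Law overrides all others)."""
    # First Law: never harm a human being.
    if harms_human:
        return False
    # Second Law: obey human orders, unless blocked by the First Law above.
    if ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    if self_destructive:
        return False
    return True

# An order to harm a human is refused: the First Law takes precedence.
print(evaluate_action(harms_human=True, ordered_by_human=True,
                      self_destructive=False))  # False
# An order that endangers the robot itself is still obeyed
# (Second Law outranks Third).
print(evaluate_action(harms_human=False, ordered_by_human=True,
                      self_destructive=True))   # True
```

The ordering of the `if` statements is the whole point: a lower law never gets evaluated in a way that could override a higher one.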
These "laws" are hardly perfect, but they give us a good place to start. If a hostile nation were to send drones into battle, whether to conquer or to terrorize, domestic drones would stand the best chance of providing a first line of defence until the military proper, staffed with humans, arrived. Domestic devices could provide cover while populations escaped. Domestic devices could drastically reduce the number of hostile robots targeting civilians and key infrastructure. Domestic devices could buy time. UN laws alone are simply insufficient to stop someone who cares little for the rule of law from exercising might.
People who know me will understand I don't lightly propose the creation of machines that kill. I'm hardly a pacifist, but I have a strong distaste for mechanical armies fighting our battles, as it cheapens the entire act of war. A government that sends a million machines into battle bears no substantial cost compared to one that sends a million people. For this reason, it only makes sense that nations defend themselves from phalanxes of artificial troops. When better countermeasures such as targetable EMPs[^1] and other intelligent mechanisms are in place, nations can then look at fully outlawing the use of machines in war. A drone is not the same as a chemical weapon. A drone is not the same as a biological or nuclear weapon. A drone, autonomous or otherwise, is a completely different type of threat, and one that should be met with whatever force is necessary until better defences are available.
At the end of the day, it's not AI that we should be outlawing, strictly regulating, or blindly fearing; it's our fellow humans.
[^1]: Electromagnetic Pulse