Slaughter, but…

Steven D Marlow
Dec 18, 2021

The Future of Life Institute recently uploaded a second video featuring their slaughterbots, and I’d like to start by emphasizing that these are their own creation. While the first video was more of an “it could happen” warning, the second video was just a gore fest. However, it wasn’t until they posted a lengthy “deep dive” on the ideas behind the latest video that I felt compelled enough to write this.

The Claim

“Are we going to allow AI to decide who lives and who dies?” — This is the first statement of the video, and it requires clarification. In the current stack of military hardware, we have the more traditional navigation uses, and efforts to use Machine Learning for object detection. Just detection. It’s a perfectly valid complaint to say these technologies can fail in unexpected ways, and are prone to errors which not only go unchecked, but have no mechanism in place that allows them to be checked for accuracy. A human operator, in a combat situation, is more likely to just accept “enemy tank” as a classification when it’s really kids playing on a sailboat with the mast folded down.

So no, AI is not deciding anything.

“Who are we kidding if we think we can exclude any uses?” — Interesting flex to place the conclusion of the video 10 seconds in, but the “promotional” videos are not trying to address dual-use technology. At least I don’t think they are. If anything, the implication is that drones plus AI, in any form, can only lead to the deaths of millions. That means a ban on the hunter/killer usage leads to a necessary ban on anything that can autopilot. Maybe their next video will be of a Tesla loaded with 500 lbs of explosives navigating to a large public gathering with no driver inside.

Academic research into “swarm technology” isn’t anything new. What has changed is the ability to shrink the power and compute resources down enough to make these projects more feasible. Navigation and object detection haven’t reached the “micro” scale yet, so showing palm-sized drones flying through complex environments in search of specific human targets is still out of reach. It’s unclear if military planners are aware of, or even care about, the current limits of “AI” technology, but at least it gives them a working goal. *The US military has spent billions of dollars on this sort of technology since at least the 1980s, and “smart AI” hasn’t progressed as much as catchy headlines would have the general public believe.

Autonomous weapons designed to kill people will kill people, but there is a heavy focus on this unclear image of AI and less on the development of autonomous systems. They also try to link military development with “falling into the wrong hands,” be that civilians or non-state actors. I think that message is undercut by suggesting non-military types, not governed by a ban, still have the ability to develop this kind of weapons platform. That points a finger back at the cloud of AI, when the specific example is always facial recognition.

*The Holocaust was a horrific event that you’re not allowed to bring up in any context, but the dropping of atomic bombs on Hiroshima and Nagasaki… yeah, it’s still OK to joke about selling nuclear weapons in grocery stores. Totally not a hyperbolic comparison to the use of drones with limited flight times and no real ability to find Waldo in an empty park.

Proliferation of this future capability seems to be the real thing driving a call for a ban. I don’t agree with the idea that people motivated to cause harm, already operating outside of the law, can be stood shoulder-to-shoulder with State-sponsored oppression where laws have been crafted in support of State actions. I’d suggest that, if you’re living in a country where the police are using a swarm of drones to kill a large number of people, then there are other aspects of your life that need to be addressed first. *I’m sure there are many wanting to comment that police have used a robot to kill someone, to which I reply: if you can’t make a slippery slope argument in one case, you have to ban the use of all slippery slope arguments in all cases.

Making a video calling out the “rhetoric and thinking” about autonomous weapons that is itself “rhetoric and thinking” boggles the mind, and has been my open criticism since day one. They seem to think those directly impacted by a ban will see the video and think twice about their actions, but it really just becomes something to shoot for. Some general is talking to a contractor, pointing to the video, and saying “make that.”

I had to watch the latest video again, because they said the third part of the video was about mitigation and addressing the risk. Maybe that part is on the cutting room floor. There is some guy giving a speech about how people came together to ban these systems, but that is intercut with a scene of a drone ready to kill a soldier and a child, both holding guns. After the two place their guns on the ground, it cuts to show the drone has a red cross badge. Perhaps the intent was to say that future drones will be for good, but everyone is likely left thinking the “medical drone” had the ability to kill people.

Safe and ethical use… of drones that can still identify humans but carry no weapons system? Safe use of drones that can’t identify people?

The Reasoning

AI as a tool for good rather than for perverse and “stupid” uses draws a line around AI that I am not comfortable with. The fact that I just saw a clip of hundreds of GPUs being used for mining digital seashells means that “stupid” uses of technology are already on a spectrum. Is AI just a tool? Many who fall into the “artifact” camp would say it is, even while talking about abilities of future AI that at least suggest sentient levels of awareness. That is the territory of the agency camp, which says common sense and a human-level understanding of language, situations, and culture can’t be isolated from active thinking.

This is the danger of using fictional examples of what “AI” could do in order to stop research that could lead to something else, while still allowing the real “ML” dangers to flourish. Is a drone using GPS to fly to a waypoint trivial? At 400 feet above relatively flat terrain with no tall structures, sure. Try that at 40 feet in an area with buildings and changes in elevation of a few hundred feet, and it’s less easy. Can your phone unlock when you point its camera at your face? Sure. But isn’t the biggest argument against facial recognition that it fails when the lighting is different, or the skin tone is darker, or someone else’s digital hash is actually a better match? (Another issue is that such systems return a “positive match” for faces that are not even stored in the database, because “not found” is not an option.)

I also have to point and make fun of those who say Tesla is wrong to use only cameras for navigation, but then also claim a micro drone with a tiny image sensor can fly around a city, locate a specific person (by their face, from a distance), and track them as they run through a building. A micro drone with about 3 minutes of flight time, launched with hundreds of other drones at some distance from “the target area.” Most of the drones people can buy today have flight times under 25 minutes, so the depiction of drones patrolling the skies is going to require the placement of a lot of wireless charging stations for them to take turns using. Not exactly a two- or three-person operation.
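To put rough numbers on that, here is a back-of-envelope sketch in Python. The only figure taken from the paragraph above is the sub-25-minute flight time; the recharge time, transit time, and patrol size are illustrative assumptions, not specs from the videos or any real system.

```python
import math

def drones_needed(on_station: int, flight_min: float, charge_min: float,
                  transit_min: float = 0.0) -> int:
    """Rough fleet size needed to keep `on_station` drones aloft at all times."""
    useful = flight_min - 2 * transit_min      # minutes actually spent on station per sortie
    if useful <= 0:
        raise ValueError("transit time eats the entire flight time")
    cycle = flight_min + charge_min            # one full sortie plus a recharge
    duty_cycle = useful / cycle                # fraction of time any one drone is on station
    return math.ceil(on_station / duty_cycle)

# Hypothetical numbers: 25 min flight, 60 min recharge, 5 min transit each way,
# and a modest patrol of 50 drones kept continuously in the air.
print(drones_needed(on_station=50, flight_min=25, charge_min=60, transit_min=5))
# -> 284 drones, plus the charging stations needed to cycle them through.
```

Even with generous assumptions, continuous coverage demands a fleet several times larger than what is visibly in the air, plus the charging infrastructure to rotate it, which is why this is not a two- or three-person operation.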

*AI for Good is a whole other topic, where people working on AI are also expected to know what use cases they are supposed to be working toward. There is enough criticism of and around AI to establish it as a cottage industry.

So much of the reasoning for a ban centers, not on the actual technology, but on mass production. Not a new kind of weapons platform, but how easy it could be for anyone to gain access. For every crazy person who feels the need to solve some emotional problem with a gun, there are a dozen more who will do so using a knife, and a hundred more who will just verbally accost people on the street. In the Slaughterbot videos, the people behind the violence are logical, calculating, and self-justified in the use of force for social or political ends. The notion that some crazy person will just order a swarm of 500 micro drones with shaped-charge explosives is far-fetched, to say the least.

The explosives part is kind of important, because their sale and purchase is already regulated. Philosophically, is a Slaughterbot a handgun, or a bullet? People remotely pilot small drones today, and could fly them toward a target, but the “terminal” aspect, the boom part, the lethality, is missing. That actually shifts the discussion toward loitering munitions, but the loitering part rules out those swarms that have a limited flight time. Is a drone that can track you while you ride a bike down a mountain trail in violation of any draft of a ban? It’s guided, but not a munition. So again, why not look to existing laws on munitions and expand them as necessary?

The Marketplace

Bad people using technology to do bad things.

We can’t regulate intent, and any law only works to decide the punishment for a violation, not to prevent a violation. You could ban prostitution, but that doesn’t eliminate the market for it. The net effect is that the cost goes up, and the financial incentive to participate matches demand. The sort of groups that would own their own cargo plane and could afford to buy thousands of Slaughterbots can afford to pay whatever that cost actually is. That means the payoff for producing them more than offsets the risk of being caught.

By all means, draft regulations that target the sale and distribution of Slaughterbots. Bulk purchase of microdrones would then be illegal. Open-source software that “gifts” micro drones with facial recognition or advanced autopilot features would also now be illegal.

There are locations where conflict of one sort or another has been ongoing for hundreds of years or more, and there is no shortage of small, remotely piloted aircraft being sold there. If your country is not making them, it’s buying them. The cost of air superiority has dropped to the point where a traditional jet fighter, plus a year’s worth of operation, costs more than a fleet of drones and the bombs they can carry. While the Slaughterbot videos focus on domestic terrorism, the battlespace in non-Western parts of the world has already shifted away from the high-risk, high-cost soldier vs. soldier conflicts of old. And all of those remotely piloted aircraft are an update away from not needing a human pilot at all. New drones are already being tested with full automation as part of the design.

This capability is still “in the future,” but the market for it is there. The high cost (in the tens of billions) is offset by the perceived rewards. In fact, the counter-argument to a full ban is that it would cost even more, in terms of lives lost, NOT to maintain at least an equal footing in the technology space.

The idea that all the ingredients are there, waiting for someone to package them up for the mass market, pretends that everything actually does work well enough to be used, and that there is an actual need for this style of “warfare.” Outside of a single use case, a drone that could assassinate a specific person (perhaps while the beneficiary of such an action is in the same area as the target, to obfuscate their involvement), only those advocating against this kind of usage are promoting it.

There is the ethical and moral argument that it’s easier for nations and groups to commit mass murder when they don’t have to get their hands dirty. Parachute a few 10-foot storage containers into the contested area, and let the drone swarm fly out and kill the bad people. A valid concern, but it doesn’t specifically address illicit sales. A government launching such an attack on its own people, for political reasons, is already violating the most extreme laws we have, and should be held to account for genocide. We can debate whether a one-drone, one-kill system is more or less humane than chemical or biological weapons, but that’s for the punitive process.

*On the impracticality side of the “future tech” on display in the videos: these are not practical as weapons platforms, and even the military has slowly had to use larger and larger ground-based vehicles to tame the recoil forces generated by a machine gun (also a regulated technology that is unlikely to be in the hands of people using robots to help rob a bank).

The Conclusion

There are few controls on the proliferation of weapons and weapons systems, despite existing UN regulations. This implies a “once out of the bag” scenario, but at the same time, a case is being made that the technology is already out of the bag. Is a ban going to be anything more than a piece of paper that says “you better stop”? Cynically, I’d view this as sanctions for Western nations that likely already try to play by the rules, and no enforcement for the nations that will use the technology against their own citizens and export the products to other areas of conflict.

Is the development of counter-technologies also going to be banned? The argument is that these technologies are dangerous because they can’t be countered, and thus no effort should be made to experiment with them. It seems like trying to get a deadly virus to work in humans so you can create vaccines, in the event that it happens organically in the wild, follows the same thought process. Not sure why one is bad but the other is OK. *Feel free to muse on the idea that the same country that created a virus that has killed millions is the one most likely to manufacture and sell Slaughterbots… and in both cases, not be called out for it by the UN.

No one seems worried about the drones gaining a mind of their own and biting the hand that feeds them, yet for some reason that always comes up. It’s an interesting position, since the dumb weapons platform is exactly what makes it such a danger, while an AI that can object to a mission on the grounds that it violates a higher code of conduct should be applauded, and given rights ;p

Interesting that everyone seems to agree that you can’t program morals into an AI. Why that is depends on the framework you are using. Some would say there is no singular moral code that all humans follow equally, while others point out that current Machine Learning efforts are incapable of understanding moral intent. The common example is that of a car, driven by an AI, that has to decide between hitting an old woman or hitting a young child. (To which I say: if it has time to decide, it’s not really having an accident.) The example gets worse when the moral worth of multiple people comes into question, as if the AI can examine the life history of everyone involved in a fraction of a second. Just brake and try to avoid hitting anyone (even if that means hitting a tree or a road barrier). It’s not rocket science.

However, if we are talking about future AI, then why can’t we examine a future time when AI does have a “common sense” understanding of people, society, and different value systems? (Yeah, that’s a bit more than just common knowledge, but I’m trying to stay on topic.)

It all goes back to one question: where is the lethality?

LAWS (lethal autonomous weapons systems) and Slaughterbots seem to fall into a very specific definition: any device that can activate a lethal payload without human intervention. Of course, if it were that simple, and that clear, it seems like some form of ban or restriction could have been formulated by now.
