Accountable Intelligence

A recent editorial in The Guardian outlined some reasons why people may want to hold organisations accountable for the consequences of using algorithms to make decisions. Using Amazon's recently scrapped resume-parsing system as a focal point for the claim that "all algorithms are inherently bad", the argument was that the biases and unconscious tendencies of the people who build the sophisticated tools so many companies rely on seep into the code and propagate the discriminatory practices that have affected huge swaths of the human race for thousands of years. While I can certainly get behind the idea that people should, to a point, be held responsible for the things they create, embarking on crusades to blame software for the blind obedience of the people who use it seems like an ineffective use of time, one that would ultimately result in yet more unintended consequences.

[Image: DJI Spark drone in Alpine White]

Imagine, if you will, that a person creates a piece of software that can be put on a quadcopter drone to autonomously ferry small packages of medical supplies and food rations up a mountain to stranded people in need of help. The package is intended to provide the basics for first aid, plus enough calories to keep people going for the several hours they may need to wait before a rescue team can arrive. Because the drone would be flying out of radio-control range, the system is made "intelligent" enough to use its camera to avoid colliding with objects, make course corrections for atmospheric conditions, and land smoothly near the stranded climbers in a safe place where they can reach the supplies.

The software works great and helps a lot of people in its first year of operation. Other rescue teams around the world are interested in doing something similar, so the person who created the system makes it freely available online because they want to help as many people as possible and, given that many decent, short-range programmable drones can be bought for less than $2,000, they see this as a genuine act of selflessness and compassion. What a wonderful gift for the world!

But then someone decides to use that software to deliver a package of C4 explosives to a very specific, very crowded location. A DJI Inspire 1 Pro quadcopter can travel as far as 2km when the weather's nice, which is certainly far enough away for a terrorist to stand while instigating fear in a population. At 2km, there's a good chance the drone will not be easily traced back to them. At 2km, there's plenty of time to drive in the opposite direction and put even more distance between them and the incident they've planned with what is essentially a low-cost guided missile. After the attack and the resulting media frenzy, a number of copycat incidents take place around the globe using the very same software that was designed to help people.

In this hypothetical situation, would the software developer who freely shared their system with the world be considered even remotely responsible for any death or destruction that came about as a result of terrorists using the tool to successfully deliver a payload of explosives? If the answer is yes, then so are the drone maker, the C4 manufacturer, and maybe even the maker of the computer that was used to load the software onto the drone, along with the maker of that computer's operating system. By the same logic, car manufacturers would be responsible for any deaths caused by idiots driving into a crowd of people.

If the answer is no, then how can we hold other machine learning or artificial intelligence developers accountable for unintended consequences when the objective of the software was ultimately good? A company like Amazon likely receives thousands of job applications a day. How can any team of human resources professionals work through such a deluge of skilled applicants to fill positions without some means of separating the exceptional from the merely good?

Organisations and developers do need to be considerate of how they develop their tools and be willing to make rapid changes when anti-patterns and biases are discovered. Amazon did this when it shut down the ineffective resume-parsing system: the people using the system saw the problem, identified that it was wrong, and made a decision. The unnamed author of the editorial stated:

It is therefore essential that moral and legal responsibility be attached to the human parts of the system. […] We hold Facebook or Google responsible for the results of their algorithms. The example of Amazon shows that this principle must be more widely extended. AI is among us already, and the companies, the people, and the governments who use it must be accountable for the consequences.

What exactly did Amazon do wrong here besides not catching the problem sooner? Heck, while it's outside the scope of this post, when has anyone but a bunch of endlessly angry people on Twitter held Facebook or Google responsible for their algorithms? The recent "hacks" and abuses of their platforms were not the result of incomplete AI or ML mechanisms, and both companies continue to use their digital systems in ways that people will find immoral and/or illegal, depending on a person's cherry-picked interpretation of a law.

As people and organisations continue to amass large amounts of data, machine learning tools will be used to make sense of it. Information will be gleaned, sometimes with hidden biases, and decisions will be made based on poor or incomplete knowledge of those obscured faults. Most people do try to do good things most of the time [1]. This means that, from a moral perspective, people will make the necessary changes to correct problems as best they can. It won't be perfect, because people are not perfect.
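To make that last point concrete, here is a minimal, purely hypothetical sketch (in Python, and in no way a reconstruction of Amazon's actual system) of how a "learned" hiring rule simply summarises the humans whose decisions it was trained on. The keywords and records below are invented for illustration only:

    # Purely illustrative toy, not any real hiring system: a "model" that learns
    # a rule from historical hiring decisions. If the history is biased, the
    # learned rule reproduces that bias; the code has no intent of its own.
    from collections import Counter

    # Hypothetical historical records: (keyword found in resume, was hired)
    history = [
        ("robotics club", True), ("robotics club", True), ("robotics club", True),
        ("women's chess team", False), ("women's chess team", False),
        ("women's chess team", False),
    ]

    def learn_rule(records):
        """Score each keyword by how often past reviewers hired candidates with it."""
        hires, totals = Counter(), Counter()
        for keyword, hired in records:
            totals[keyword] += 1
            hires[keyword] += int(hired)
        return {keyword: hires[keyword] / totals[keyword] for keyword in totals}

    scores = learn_rule(history)
    print(scores)  # {'robotics club': 1.0, "women's chess team": 0.0}
    # The rule penalises the second keyword only because the people whose
    # decisions it summarises did so first.

The bias lives in the historical decisions; the software merely compresses them into a rule, which is exactly why fixing the people and the data matters more than shaming the code.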

Talk of legal accountability has no place here unless people were actively harmed as a result of the software, like the people in war-torn nations where military drones identify targets via ML solutions and a human pushes a button to launch a missile. The worst that could have happened from Amazon's ineffective HR system is that a few really talented people got away and are now working somewhere else, providing value to a different company that appreciates the skills and knowledge their recent hire brings to the table.

Long rant aside, threats of moral shaming and legal action will do nothing to change the fundamental problem we find in software, which is that our digital tools are very much a reflection of our flawed, organic selves. Bias is everywhere and comes in many shades. It's the role of an informed population to raise awareness when problems are found, perhaps even offer a solution (heaven forbid!), and follow through to ensure issues are resolved. This isn't just for software, but for behaviour, perceptions, and personal actions, too.


  1. Perhaps I am naive, but I do believe this to be the case. Most of us are not inherently evil.