Who Defines Morality?

A local NYTimes subsidiary recently published an article on Delphi, a system that aims to determine the ethics of an action. This is an important area of research as we start to see more machines that operate semi-autonomously and need to decide whether a course of action is correct. An obvious example of where ethical decision-making is necessary is the self-driving vehicle. If a vehicle carrying two passengers is travelling down the road and an oncoming car swerves into the same lane, the vehicle will need to determine the correct course of action very quickly. Does it slam into the oncoming vehicle, potentially killing the passengers in both cars? Does it slam on the brakes, offering the offending vehicle more time and space to correct its lane transgression? Does it rapidly move into another lane or onto a sidewalk? Each of these decisions requires information that may not be available, and each carries consequences. So, given the lack of time to collect additional data, the unforgiving laws of physics governing bodies in motion, and the immediate need to act, which action would be the most correct?
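To make the difficulty concrete, here is a deliberately naive sketch, in Python, of one way such a split-second choice could be framed: pick the action with the lowest expected harm across uncertain outcomes. Every number below is invented, and the framing itself smuggles in an ethical stance, namely that harms can be scored on a single scale and that the probabilities are knowable at all.

```python
# Toy sketch only -- not how any real vehicle behaves.
# Each action maps to (probability, harm) outcome pairs; all values invented.
actions = {
    "continue": [(0.9, 10.0), (0.1, 2.0)],  # likely head-on collision
    "brake":    [(0.5, 6.0), (0.5, 0.5)],   # outcome depends on the other driver
    "swerve":   [(0.3, 8.0), (0.7, 1.0)],   # risks whoever is on the sidewalk
}

def expected_harm(outcomes):
    """Probability-weighted harm across the possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

for name, outcomes in actions.items():
    print(f"{name:>8}: expected harm = {expected_harm(outcomes):.2f}")

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(f"chosen action: {best}")
```

Change any of the invented numbers and the "correct" answer changes with them, which is rather the point.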

Decisions are considered right or wrong based on the ethics of the observer, and ethics are incredibly subjective. While some may feel that severing a man's right hand is justifiable because he stole a loaf of bread, others would find it reprehensible. Is a politician right to use their position of authority to benefit their friends or family? Is a doctor right to over-prescribe medications so that a terminally-ill patient does not feel pain? Is a teacher right to teach their students a government-mandated concept or belief? Is a parent right to teach their children that people from a particular nation or genetic lineage are not worthy of respect? Is an individual right to kill a mosquito that has landed on their skin? Is an individual right to kill a mosquito that is sitting on a table? Is an individual right to kill every mosquito on the planet with a genetically-engineered virus?

Most people will have an immediate answer to each of these questions, yet not one of them is a clear "Yes" or "No". Our sense of right and wrong has been built – and continues to be built – on our lived experience, the circumstances we've faced, and the people who surround us.

There have been a lot of articles over the past decade on how various vision-based and decision-based systems are racist, sexist, or otherwise discriminatory. I generally disagree with the authors of these pieces because to be an "-ist", one has to possess prejudice. Software can certainly be written to enforce any biases the developer (or organisation) may have, but machines generally do not care where a person's ancestors may have lived, which sexual organs someone may have, or how they use them. Instead, I see these vision- and decision-based systems as being ignorant; they lack sufficient information to reach a better solution. When this sort of problem arises at the day job, I generally say "the system (or idea) is incomplete". My concern is that software that tries to determine the moral outcomes of a decision will be just as incomplete.
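A minimal, entirely hypothetical example of what I mean by "incomplete": a rule that rejects people not out of prejudice, but because the information that would vindicate them was never collected. The scenario, names, and thresholds below are all invented.

```python
# Hypothetical sketch: an "ignorant" rule, not a prejudiced one.
# It only knows two signals, so anyone whose credit-worthiness lives
# outside those signals is rejected by omission, not by malice.

def approve_loan(income, recorded_history_years):
    # Thresholds invented for illustration.
    return income >= 30_000 and recorded_history_years >= 3

applicants = [
    {"name": "A", "income": 45_000, "recorded_history_years": 5},
    # B has always paid on time through informal lending that was
    # never recorded, so the system sees zero history.
    {"name": "B", "income": 45_000, "recorded_history_years": 0},
]

for a in applicants:
    verdict = approve_loan(a["income"], a["recorded_history_years"])
    print(a["name"], "approved" if verdict else "rejected")
```

The rule has no concept of who B is; it simply never received the data that would have changed the answer.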

Who are the people that define right and wrong for Delphi and systems like it? Generally it will be academics, philosophers, and a handful of committee members with strongly-held beliefs: essentially, very intelligent people with similar ideas who live in a well-protected bubble apart from the wider society. While they will most certainly do their best to codify modern morality in a manner that a set of dynamic algorithms can understand, there will be a great deal they cannot prepare for, simply because they suffer from the same problem we all have: ignorance. We do not know what we do not know. And, to add a little more complexity to the problem, two people in the same society will have different ideas of right and wrong.
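Systems like Delphi are typically trained on crowd-sourced moral judgements, and somewhere in the pipeline those judgements have to be collapsed into a single label. The votes below are invented, but the mechanism is the point: whichever aggregation rule is chosen, the disagreement (arguably the most honest moral signal in the data) is thrown away.

```python
# Invented votes; the aggregation step is what matters.
from collections import Counter

judgements = {
    "killing a mosquito on your skin": ["ok", "ok", "ok", "wrong"],
    "over-prescribing for a dying patient": ["ok", "wrong", "ok", "wrong"],
}

for action, votes in judgements.items():
    label, count = Counter(votes).most_common(1)[0]
    # A 3-1 split and a 2-2 coin flip both become one confident label.
    print(f"{action}: {label} ({count}/{len(votes)} agree)")
```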

For the moment, the researchers investing so much effort into Delphi and systems like it are keeping their goals modest, understanding the complexities of the project. However, as more of our actions are monitored by corporations and governments around the world, I worry that "ethical systems" like this will be used to seek out those who deviate from a prescribed moral code. China already does something similar with its various surveillance programs. The speed at which dissenters can be found and silenced is nothing short of terrifying, and given the recent societal directions of some Western nations, it would not be surprising at all to learn in the near future that other countries are preparing to do the same.